[Libav-user] Encoding/decoding subtitles, some code samples or tutorial is needed
it seems that the FFmpeg Doxygen API docs have minimal information about
handling subtitles compared to video/audio,
so it would be nice to see some code examples or a tutorial covering the
full cycle: preparing subtitles, encoding, putting them into a container,
then demuxing, decoding and presenting them on screen to the user.
If I understand correctly,
it should be enough to fill the AVSubtitleRect.text field with the desired
text, put that rect into AVSubtitle.rects, fill in the other fields, pass the
AVSubtitle to avcodec_encode_subtitle(), and put the resulting "subtitle_out"
buffer into an AVPacket. Is that right?
After decoding, I should get an AVSubtitle struct back with AVSubtitleRects
containing text and/or a bitmap.
Re: Encoding/decoding subtitles, some code samples or tutorial is needed
On 24/05/17 at 05:55, Anton Sviridenko wrote:
> Is subtitle always rendered to bitmap? What format is used for bitmap? How can I
> draw it on the screen? Do I always need libass library to do that?
Subtitles are rendered to a bitmap only if they are dvdsubs (.VOBs). In
that case, inspecting AVSubtitle.rects and drawing what's there should be
enough (see how ffplay does it). There is one problem with colors: the
color palette is not stored in the VOBs themselves but in the DVD's .IFO
files.
If not, the subtitle appears as text (.srt, .sub, etc.). In that case it
is up to the application to draw it, either by using libass directly or
via the subtitles filter (through a filter graph).