Subtitling discussion

we want subtitling, that's for sure, but how can it work most effectively?

Hi there,

just checked out the features list, and here's a question about the subtitling. The wishlist says:

  • subtitles of every clip (linked to the clip)
  • joined subtitle of the subtitles in the playlist

I’m not sure this is a very practical way of going about it, for a few reasons:

  • Subtitling clips before they are used in a video means that clips will be subtitled which are never used, and full clips will be subtitled when only segments of them are used, creating work that is pretty boring and keeping people from actually producing more videos.
  • I don’t understand how the subtitles of a clip would be merged into subtitles of a video, as there would be two conflicting timecodes
  • I assume we want timecoded subtitle files, not video layers, that are easy to translate into more languages
  • Wouldn’t the clip description be enough in the first place, in combination with a communication tool? Say there is a clip in Italian, the video is good, the description is good, but I don’t speak Italian. So I would “request” someone to transcribe and subtitle that clip, thus avoiding the additional work of clips being subtitled and never used.

I’m pretty sure there are subtitling tools online that already do a lot of what we want. Maybe it makes sense to look at what’s out there and see if we can integrate it into the workflow?

It generally seems like it might be easier to focus the subtitling on finished videos rather than clips, as the incentive to put in the work will be a lot higher, plus it’s easier to organise. It would also enable us to offer the subtitling tools to films produced elsewhere: say I see an awesome movie and want it subtitled in a particular language, I could add it as a subtitle project without dealing with the whole project management.

Just some thoughts, not sure how conclusive or coherent they are.


This feature comes from the old idea of the “diary of a movement” CMS (where it would have been necessary to subtitle a clip if you wanted to upload it to the project), and it has much to do with the criticism above. I will point out the pros I see in this kind of subtitling management:

1.) A video – even as raw material – is useless for your project if you don’t get what it’s about. There is a lot of material which is totally useless because it does not have subtitles; video material without subtitles goes nearly nowhere.
2.) Video material which is already subtitled is easier to translate into other languages and therefore easier to use in new video projects.
3.) The most difficult part of subtitling into your own language is first making a subtitle in the foreign language and then translating it. If we encourage people to upload their material with a subtitle already attached (even in their own language rather than English or Spanish), the process of translating gets easier and faster.
4.) If we subtitle only the ready-to-publish movie and not the clips, there is no direct or visible connection between the subtitle and the original clip – so if another project wants to use the same clip, it has to redo the same work.
5.) If we manage the workflow with these clips, you always split the work into smaller pieces. That is exactly what we need to get more people participating in the project. In fact it is much easier to get help if you send people a piece of it all (like a clip) than the whole project at once.

About the timecode mixing:
I know that this is a feature – but it is technically possible. At least as long as we don’t do frame-exact but second-exact timecodes (which is enough in most cases), the code for building one timecode out of a playlist of clips with subtitles should be easy. (Example: the first clip is 5:30 min long, so all timecodes of the second clip have to be shifted 330 seconds later, the third clip by 330 seconds plus the length of the second clip, and so on.)
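The offset scheme described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and data shapes are my own, not anything from the project) of how per-clip subtitle timecodes would be shifted into one joined track:

```python
# Hypothetical sketch: merge per-clip subtitles into one playlist-wide
# subtitle track by shifting each clip's timecodes by the total duration
# of the clips before it (second-exact, as suggested above).

def merge_playlist_subtitles(clips):
    """clips: list of (duration_seconds, subtitles), where subtitles is a
    list of (start, end, text) tuples timecoded relative to that clip."""
    merged = []
    offset = 0
    for duration, subtitles in clips:
        for start, end, text in subtitles:
            merged.append((start + offset, end + offset, text))
        offset += duration  # e.g. a 5:30 clip shifts everything after it by 330 s

    return merged

# Example: the first clip is 5:30 (330 s), so the second clip's subtitle
# starting at 10 s lands at 340 s in the joined track.
playlist = [
    (330, [(0, 4, "hello"), (320, 325, "bye")]),
    (60,  [(10, 15, "ciao")]),
]
print(merge_playlist_subtitles(playlist))
```

The same logic works frame-exact too; only the unit changes.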

If only parts of the clip were used, the subtitle should nevertheless be done on the raw clip and connected with it, so that the clip can be used by other people out of the pool later on. That does not mean you have to subtitle the whole clip or translate the whole subtitle: just the parts you need. But every piece of work which has been done is saved and not lost, so that another project can use it and maybe even complete it.

I see the biggest problems with the different versions of the clips – how to manage automatically that not only the clip but also its subtitle gets cut correctly if we do not all use the same software (e.g. an online editor like the one from Kaltura). But I think this could be managed “by hand”. (By the way, I don’t know of a single piece of software that can handle subtitles this way – is there one? Would be nice…)


Quick response:
Why don’t we use “subtitling” for actual translation work, and “transcribing” for writing up what people say in their own language? It’s easier than explaining each time what we are talking about.

1) I would have precise descriptions of clips in the upload form, which, combined with the actual video, should give people a pretty good idea of what’s happening. Also, a communication tool that allows people to ask for transcripts/subtitles for specific clips they are interested in could be really useful. I’d be much more inspired to spend the time on it if someone was already interested in it.

3) Motivating people to add transcripts is a great thing, but personally I wouldn’t be contributing much. The 30 seconds it takes me to write a description of a clip versus the 10 minutes it would take to transcribe it makes a huge difference.

4) interesting, yeah, subtitles should totally be linked to the original clip

5) I don’t think I really get what you mean by this?

For the timecode, I guess it’s similar even if you do it with frames; they add up just the same. The question is what happens if you move clips around in a project and change transition lengths and things like that. Really it’s a geek question for someone who understands what they’re talking about.

I guess one of the basic questions is what kind of subtitles we are talking about?


Regarding the message on IRC:
because if we use Cinelerra, we could use the generated XML file to get the points where to split, cut and edit the subtitles of a clip automatically

Final Cut can create EDLs and XMLs. I assume Avid can do the same, but I’m not sure – Sy probably knows about that. I have zero experience with using XMLs or EDLs across editing platforms, so that would need extensive testing somehow.


I don’t like the term transcript, because in my opinion it’s just an unfinished subtitle (WHAT they say, without a timecode) – if you combine it with a timecode (WHEN they say it) you have a finished subtitle, no matter in which language. (Even an English subtitle for English-spoken audio is a subtitle, not just a transcript.)
Therefore I only work with subtitles when editing videos – transcripts I use at home for academic work and the like, where I do not need to know when something was said. For our project I don’t see any reason to use transcripts – just subtitles.

1) is nice, but a finished subtitle would be nicer.
3) Yeah, but you can’t work with just the description – you end up searching for somebody who can transcribe, subtitle and then translate it. If I contribute a clip to an international project, it is obvious that a clip with an English subtitle will be easier to use than a clip in Portuguese with a good (English?) description. I am not against good descriptions, but they are only useful for finding out which clips I would like to use – not for choosing which one I really use. Example: if there are two clips about the eviction of the landless movement in southern Brazil, one with a very good description, the other with a very short description but a Portuguese subtitle, I think the second one would be chosen, because a lot of the work is already done and you don’t have to put too much work into it.

5) Example: you want to clean a squat after a big party. If you ask somebody “hey, wanna help cleaning the building?” the chance of getting help is lower than if you ask “hey, we are cleaning the building, wanna help by cleaning the corner over there?”

And now about the new questions:
What kind of subtitles are we talking about?

Surely not about burned-in layers (like the YouTube crap) but about separate subtitle files which can later be used in .mkv’s or on DVDs. I prefer .srt because it is easy to handle, but in the end I don’t care which format they are.
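For reference, a minimal .srt file is just numbered entries with millisecond-exact timecodes and plain text (the content here is only an illustration):

```
1
00:00:01,000 --> 00:00:04,000
first subtitle line

2
00:05:30,500 --> 00:05:33,000
second subtitle line
```

Because it is plain text, translating it into another language means editing only the text lines and leaving the timecodes untouched, which fits the translation workflow discussed above.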

What about the timecode when stretching, cutting and moving clips?
I hope everyone understands what I’m talking about when I use the term clip. If you look at the second workflow, you see that every clip is part of the whole project (movie or show). But of course the clip is not just moved around in the playlist to get the end product (that was the way the old newsreal concept worked); it gets cut, maybe even split, and the parts are used in two different positions in the end product (that is what normally happens with interviews, isn’t it?).
As you see, this could be troublesome for the subtitles, because I don’t know of a single editing program which handles the subtitles of a clip automatically. Well, in the “normal” world you would not need such a feature, as subtitling is normally the last thing you do – after all editing is done.
In our project this is not the case – or at least I would not want us to have to work this way.
Therefore I played around with the XML file of Cinelerra, and I think it would be quite easy to handle the subtitles with the XML file (which is nothing but an EDL). Or better: to get an imc-videolab subtitle EDL out of it which we can use in our online platform. And as Mara says, similar files are produced by Final Cut and maybe even Avid. So what we would need is to write a module which translates the different files into one imc-videolab EDL (which our player can handle to make a playlist out of it).
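The idea of deriving the finished movie’s subtitles from an EDL can be sketched roughly like this. Everything here is an assumption for illustration – the function name, the simplified EDL shape (clip id, source in/out, timeline position) and the subtitle tuples are mine, not the real Cinelerra XML schema or any imc-videolab format:

```python
# Hypothetical sketch: given an EDL saying which segment of which raw clip
# sits where on the timeline, cut and shift the raw clips' subtitles so
# they line up with the edited movie.

def subtitles_from_edl(edl, clip_subtitles):
    """edl: list of (clip_id, source_in, source_out, timeline_start),
    all in seconds. clip_subtitles: dict clip_id -> list of
    (start, end, text) timecoded relative to the raw clip."""
    out = []
    for clip_id, src_in, src_out, tl_start in edl:
        for start, end, text in clip_subtitles.get(clip_id, []):
            # Keep only subtitles overlapping the used segment,
            # trim them to it, and shift them onto the timeline.
            s, e = max(start, src_in), min(end, src_out)
            if s < e:
                out.append((s - src_in + tl_start, e - src_in + tl_start, text))
    return sorted(out)

# Example: an interview clip split into two parts used at different
# timeline positions; the subtitle at 25-30 s falls in an unused part.
edl = [("interview", 10, 20, 0), ("interview", 40, 50, 10)]
subs = {"interview": [(12, 14, "part one"), (25, 30, "unused"), (41, 43, "part two")]}
print(subtitles_from_edl(edl, subs))
```

This is also why keeping the subtitle linked to the raw clip pays off: the same subtitle file can be re-cut for any future edit of the same material.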

And to handle the problem that we don’t know the file type of every program’s EDL, we should also write a module for creating an EDL by hand.

Wow – this is getting more complicated than I first thought, but it seems possible. I will make another workflow diagram later on to show how this is meant; maybe it will get clearer by that. If you look at it, it’s not just about the subtitles but about the whole workflow of how to edit the project. Maybe it would overload the program if we also do a real non-linear editor online (as described above), but I think we should at least experiment with it, and for handling the subtitles it should not be too much.

