FFmpeg by Example
FFmpeg By Example is a community-driven platform showcasing various multimedia techniques using FFmpeg, inviting contributions, and allowing users to try examples online, supported by community donations.
FFmpeg By Example is a documentation platform designed to illustrate various methods of utilizing FFmpeg, a powerful multimedia framework. The site encourages community contributions, inviting users to share their unique ideas and examples. Among the examples provided are techniques for printing text files to standard output, extracting multiple video clips from a single input, listing supported audio and video encoders, and analyzing video frames and metadata using the showinfo filter. Other examples include drawing text and boxes on videos, extracting raw keyframes, cutting audio files, and generating videos from images at specific frame rates. The platform also features examples for creating erratic camera movement effects and extracting frames to JPG files. Users are encouraged to improve existing examples and can try them online. The initiative is supported by donations from the community.
- FFmpeg By Example showcases diverse FFmpeg usage techniques.
- The platform invites community contributions and improvements.
- Examples cover a wide range of multimedia tasks, including video and audio manipulation.
- Users can try examples online and share their own.
- The initiative is supported by donations from users.
Related
Ask HN: Share your FFmpeg settings for video hosting
A user is developing a video hosting site allowing MP4 uploads, utilizing H.264 for video and Opus for audio. They seek advice on encoding settings and challenges faced in the process.
Show HN: Video editor app that generates FFmpeg commands
Newbeelearn has launched a free offline video editing tool using FFmpeg commands, allowing users to edit videos, adjust elements, and export commands, though audio playback is not supported in previews.
Generate video sprites using just FFmpeg
The article outlines creating video sprites with FFmpeg using a shell script. It details calculating video duration, extracting frames, and generating a sprite sheet for web integration.
- Many users express reliance on AI tools like ChatGPT to generate FFmpeg commands, highlighting the complexity of FFmpeg's syntax.
- Some users criticize the website for its examples being overly complicated or poorly organized, making it less user-friendly.
- There is a call for better documentation and resources, with suggestions for creating a centralized repository for FFmpeg commands.
- Users share personal experiences and tips for using FFmpeg effectively, including performance optimizations and alternative tools like GStreamer.
- Several comments emphasize the need for community-driven contributions to improve the FFmpeg learning experience.
Weird thing is I got better performance without "-c:v h264_videotoolbox" on the latest Mac update; maybe some performance regression in Sequoia? I don't know. The equivalent flag for my Windows machine with an Nvidia GPU is "-c:v h264_nvenc". I wonder why ffmpeg doesn't just auto-detect this? I get about an 8x performance boost from it. Probably the one time I actually earned my salary at work was when we were about to pay through the nose for more cloud servers with GPUs to process video, and I noticed the version of ffmpeg that came installed on the machines was compiled without GPU acceleration!
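For reference, a minimal sketch of explicit hardware-encoder selection on each platform (file names and bitrates here are placeholders; whether either encoder is available depends on how your ffmpeg was compiled, which "ffmpeg -encoders" will confirm):

# macOS: VideoToolbox hardware H.264 (rate-controlled by bitrate, not CRF)
ffmpeg -i input.mp4 -c:v h264_videotoolbox -b:v 6M -c:a copy out_vt.mp4

# Windows/Linux with an Nvidia GPU: NVENC hardware H.264
ffmpeg -i input.mp4 -c:v h264_nvenc -preset p4 -b:v 6M -c:a copy out_nvenc.mp4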
[0] https://gist.githubusercontent.com/nielsbom/c86c504fa5fd61ae...
[1] https://gist.githubusercontent.com/jazzyjackson/bf9282df0a40...
Lately, I've been playing around with more esoteric functionality. For example, storing raw video straight off a video camera on a fairly slow machine. I built a microscope and it reads frames off the camera at 120FPS in raw video format (YUYV 1280x720) which is voluminous if you save it directly to disk (gigs per minute). Disks are cheap but that seemed wasteful, so I was curious about various close-to-lossless techniques to store the exact images, but compressed quickly. I've noticed that RGB24 conversion in ffmpeg is extremely slow, so instead after playing around with the command line I ended up with:
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1280x720 -i test.raw -vcodec libx264 -pix_fmt yuv420p -crf 13 -y movie.mp4
This reads in raw video. Because raw video doesn't have a container, it lacks metadata like "pixel format" and "image size", so I have to provide those. It's order-dependent: everything before "-i test.raw" applies to decoding the input, and everything after applies to writing the output. I do one tiny pixel format conversion (which ffmpeg can do really fast) and then write the data out in a very, very close to lossless format with a container (I've found .mkv to be the best container in most cases). Because I hate command lines, I ended up using ffmpeg-python, which composes the command line from this:
self.process = (
    ffmpeg
    # Input: raw frames arrive on stdin, so format and geometry must be given.
    .input(
        "pipe:",
        format="rawvideo",
        pix_fmt="yuyv422",
        s="{}x{}".format(1280, 720),
        threads=8,
    )
    # Output: near-lossless H.264 at CRF 13, keeping 4:2:2 chroma.
    .output(fname, pix_fmt="yuv422p", vcodec="libx264", crf=13)
    .overwrite_output()
    .global_args("-threads", "8")
    .run_async(pipe_stdin=True)
)
and then I literally write() my frames into the stdin of that process. I had to limit the number of threads because the machine has 12 cores and at least 2 are in use at all times to run the microscope. I'm still looking for better/faster lossless YUV encoding.
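For truly lossless YUV, a few directions worth trying (a sketch, not a benchmark; speed and file-size trade-offs depend heavily on the machine):

# x264 lossless mode, tuned for speed, keeping 4:2:2 chroma
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1280x720 -i test.raw -c:v libx264 -qp 0 -preset ultrafast -pix_fmt yuv422p lossless.mkv

# FFV1, a dedicated lossless codec that archives well in mkv
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1280x720 -i test.raw -c:v ffv1 -level 3 lossless_ffv1.mkv

# Ut Video, simple and fast at the cost of larger files
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1280x720 -i test.raw -c:v utvideo lossless_ut.mkv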
https://www.ffmpegbyexample.com/examples/l1bilxyl/get_the_du...
Don't call two extra tools to do string processing; that is insane. FFprobe is perfectly capable of giving you just the duration (or whatever) on its own:
ffprobe -loglevel quiet -output_format csv=p=0 -show_entries format=duration video.mp4
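That prints the duration in seconds and nothing else (e.g. 1404.038000); the csv=p=0 writer option suppresses the "format" section prefix, so no further string processing is needed.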
Don't simply stop at the first thing that works; once it does, ask yourself whether there is a way to improve it.

It's not a great name and not very discoverable, but there are a lot of very useful ffmpeg-by-example snippets there, with illustrated results and an explanation of what each option in each example does.
For reference:
One-liner:
> ffmpeg -loglevel info -f concat -safe 0 -i <(for f in *.mkv; do echo "file '$(pwd)/$f'"; done) -c copy output.mkv
Or the method I ended up using: create a files.txt file with each file listed[0]
> ffmpeg -f concat -safe 0 -i files.txt -c copy output.mkv
files.txt
> file 'file 1.mkv'
> file 'file 2.mkv'
> # list any additional files
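The list can also be generated rather than written by hand; a sketch using printf, which handles the quoting more safely than echo:

for f in *.mkv; do printf "file '%s'\n" "$PWD/$f"; done > files.txt

(Filenames that themselves contain single quotes still need escaping per the concat demuxer's rules.)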
The closest I seem to be able to get is to divide the file size by the duration to get an average bitrate, add some wiggle room, and then split based on time. Any pointers appreciated.
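A sketch of that size-to-time conversion (TARGET_MB and input.mp4 are illustrative; with -c copy the splits land on keyframes, so chunk sizes will only be approximate):

DUR=$(ffprobe -loglevel quiet -output_format csv=p=0 -show_entries format=duration input.mp4)
SIZE=$(stat -c%s input.mp4)   # bytes; use stat -f%z on macOS
TARGET_MB=100
SEG=$(echo "$DUR * $TARGET_MB * 1048576 / $SIZE * 0.95" | bc -l)   # 5% wiggle room
ffmpeg -i input.mp4 -c copy -f segment -segment_time "$SEG" -reset_timestamps 1 out%03d.mp4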
$ helpme ffmpeg capture video from /dev/video0 every 1 second and write to .jpg files like img00000.jpg, img00001.jpg, ...
$ helpme ffmpeg assemble all the .jpg files into an .mp4 timelapse video at 8fps
$ helpme ffmpeg recompress myvideo.mp4 for HTML5-friendly use and save the result as myvideo_out.webm
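For comparison, plausible hand-written equivalents of those three prompts (device paths and codec choices are assumptions, not what the tool actually emitted):

ffmpeg -f v4l2 -i /dev/video0 -vf fps=1 -start_number 0 img%05d.jpg
ffmpeg -framerate 8 -i img%05d.jpg -c:v libx264 -pix_fmt yuv420p timelapse.mp4
ffmpeg -i myvideo.mp4 -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus myvideo_out.webm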
I know there are full-blown AI terminals like Warp, but I didn't like the idea of a terminal app requiring a login, possibly sending all my commands to a server, etc., and just wanted a script that only calls the cloud AI when I ask it to.

So I changed the verbosity to trace: ffmpeg -v trace -f data -i input.txt -map 0:0 -c text -f data -
--snip--
[dost#0:0 @ 0x625775f0ba80] Encoder 'text' specified, but only '-codec copy' supported for data streams
[dost#0:0 @ 0x625775f0ba80] Error selecting an encoder
Error opening output file -.
Error opening output files: Function not implemented
[AVIOContext @ 0x625775f09cc0] Statistics: 10 bytes read, 0 seeks
I was expecting text to be written to stdout? What did I miss?
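The trace output itself suggests the answer: this build only supports stream copy for data streams, so swapping -c text for -c copy should achieve what -c text was meant to do (a guess based on the error message, not tested against that build):

ffmpeg -v quiet -f data -i input.txt -map 0:0 -c copy -f data -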
Can we have a best-of-HN and put it on there, or vote on it, or whatever?
There was one time I didn't use pyaudio correctly, so instead I used this process where ffmpeg stitches multiple audio files together into one, passed in as an array CLI argument. Crazy.
Currently looking for an FFmpeg related job https://gariany.com/about
Anyway, long story short: instead of the usual terrifying inline ffmpeg filter tangle, the filter can be structured however you want and you can include it from a dedicated file. It sounds petty, but I really think it was the thing that finally let me "crack" ffmpeg.
The secret sauce is the "/", "-/filter_complex file_name" will include the file as the filter.
As I am pretty happy with it I am going to inflect it on everyone here.
In motion_detect.filter
[0:v] split [motion] [original];

[motion]
  scale=w=iw/4:h=-1,
  format=gbrp,
  tmix=frames=2
[camera];

[1:v] [camera]
  blend=all_mode=darken,
  tblend=all_mode=difference,
  boxblur=lr=20,
  maskfun=low=3:high=3,
  negate,
  blackframe=amount=1,
  nullsink;

[original] null
And then some Python glue logic around the command:

ffmpeg -nostats -an -i ip_camera -i zone_mask.png -/filter_complex motion_detect.filter -f mpegts udp://127.0.0.1:8888
And there you have it: motion detection while staying in a single ffmpeg process. The glue logic watches ffmpeg's log output for the blackframe messages and saves the video.

Explanation:
"[]" are named inputs and outputs
"," are pipes
";" ends a pipeline
Take input 0 and split it into two streams, "motion" and "original". The motion stream gets scaled down, converted to gbrp (later blends were not working on yuv data), then temporally mixed with the previous two frames (removing high-frequency motion), and sent to the "camera" stream. Take the zone-mask image provided as input 1 and the "camera" stream, mask the camera stream, take the difference with the previous frame to bring out motion, blur to expand the motion pixels, then threshold to black/white, and invert the image for correct blackframe analysis, which prints messages when too many motion pixels are present. The "original" stream gets sent to the output for capture.
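A minimal sketch of that glue logic (the camera URL is a placeholder and the filter filename follows the example above; note that ffmpeg writes its log, including the blackframe lines, to stderr, so that is the pipe watched here):

import subprocess

# Launch the single-ffmpeg-process motion detector described above.
cmd = [
    "ffmpeg", "-nostats", "-an",
    "-i", "rtsp://ip_camera",          # placeholder camera URL
    "-i", "zone_mask.png",
    "-/filter_complex", "motion_detect.filter",
    "-f", "mpegts", "udp://127.0.0.1:8888",
]
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)

for line in proc.stderr:
    # blackframe logs lines like "[Parsed_blackframe_6 @ ...] frame:42 pblack:99 ..."
    if "blackframe" in line and "pblack:" in line:
        # Motion detected: trigger whatever saves the mpegts stream here.
        print("motion:", line.strip())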
One odd thing is the mpegts, I tried a few more modern formats but none "stream" as well as mpegts. I will have to investigate further.
I could, and probably should have, used opencv to do the same. But I wanted to see if ffmpeg could do it.
While as a concept, I absolutely love "X by Example" websites, this one seems to make some strange decisions. First, the top highlighted example is just an overly complicated `cat`. I understand that it's meant to show the versatility of the tool, but it's basically useless.
Then below, there are 3 pages of commands, 10 per page, with no ordering whatsoever in terms of usefulness. There's what looks like an upvote button, but it's actually just a bullet decoration.
There's also a big "try online" button for a feature that's not actually implemented.
All in all, this is a pretty disappointing website that I don't think anyone in this thread will actually use, even though everyone seems to be "praising" it.
You push the input files, the command, and fetch the output when done.
Right now, I am looking to normalize some audio without using ffmpeg-normalize, a popular Python package. Nothing against it on a personal level, I just ... want to know what is going on, and it's a lot of files and lines of code to do what is basically a two-pass process.
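For the curious, the two passes boil down to something like this with the loudnorm filter (the target values and the measured_* numbers below are placeholders; the real ones come out of the first pass's JSON):

# Pass 1: analyze only, printing measured loudness as JSON
ffmpeg -i in.wav -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=json -f null -

# Pass 2: feed the measured values back for a linear two-pass normalization
ffmpeg -i in.wav -af loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=-23.1:measured_TP=-4.2:measured_LRA=7.5:measured_thresh=-33.5:offset=0.3:linear=true -ar 48000 out.wav

(The -ar is there because loudnorm internally resamples to 192 kHz.)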
I have a growing interest in metadata and that's also a case which I do not find is often well-addressed.