Video compression technologies, and the storage formats that carry them, are under constant research and development by big business, academia, and hobbyist communities alike. In the drive to deliver the highest-quality content in the most convenient form possible, new formats regularly emerge as the preferred choice for a given purpose.
Even so, the basic building blocks of video files remain consistent, and understanding those core concepts is an essential step in discerning what's what in the world of video.
Below is a simplified three-stage "life cycle" of a video file: encoding, storage (in the file), and playback.
Containers are file formats designed to hold one or more encoded streams, usually audio alone, or audio and video together. When a media player loads a container file, it first reads the header, which summarises what the file contains: the codecs used, and the bitrate and length of each stream.
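To make the header idea concrete, here is a minimal sketch of the kind of summary a player might build after reading a container header. All the names (`StreamInfo`, `ContainerHeader`, the codec strings) are illustrative, not any real container's layout:

```python
from dataclasses import dataclass

@dataclass
class StreamInfo:
    kind: str           # "video" or "audio"
    codec: str          # e.g. "h264", "aac"
    bitrate_kbps: int   # bits per second, in kilobits
    duration_s: float   # stream length in seconds

@dataclass
class ContainerHeader:
    """The summary a player reads before decoding anything."""
    format_name: str
    streams: list[StreamInfo]

# A 10 minute file with one video and one audio stream:
header = ContainerHeader(
    format_name="matroska",
    streams=[
        StreamInfo("video", "h264", 2000, 600.0),
        StreamInfo("audio", "aac", 128, 600.0),
    ],
)

for s in header.streams:
    print(f"{s.kind}: {s.codec} @ {s.bitrate_kbps} kb/s, {s.duration_s}s")
```

The point is that everything the player needs to decide *how* to handle the file is available up front, before it touches the streams themselves.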
Video files are generally multimedia: they contain video information plus the associated audio (the soundtrack).
These are described as the video stream and audio stream respectively, or occasionally as the video and audio tracks.
Codecs encode information for storage or transmission, and subsequently decode it for playback.
Video files usually contain both video and audio streams, which can be created with various combinations of audio and video codecs and stored within a single container.
To successfully play a particular video file, the media player software used must have access to the codecs necessary to decode the streams present in the file.
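The check a player performs can be sketched in a few lines: every stream's codec must have a matching decoder available. The codec names and decoder set here are illustrative:

```python
# Decoders this hypothetical player has installed:
available_decoders = {"h264", "vp9", "aac", "vorbis"}

def can_play(stream_codecs):
    """Return (playable, missing) for a file's list of stream codecs."""
    missing = [c for c in stream_codecs if c not in available_decoders]
    return (len(missing) == 0, missing)

print(can_play(["h264", "aac"]))  # (True, [])
print(can_play(["av1", "aac"]))   # (False, ['av1'])
```

A single unsupported stream is enough to break playback, which is why a file can play with picture but no sound (or vice versa) when only one of its codecs is missing.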
Bitrate is the number of bits used per second to encode a stream. For instance, a 2000 kb/s video stream consumes 2000 kilobits of data for every second of video, which works out to about 150 MB of disk space for a 10 minute video (assuming no audio streams).
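The arithmetic behind that figure is simple enough to write down (using 1000 kB per MB):

```python
def stream_size_mb(bitrate_kbps: int, duration_s: int) -> float:
    """Disk space for a single stream: kilobits -> megabytes."""
    total_kilobits = bitrate_kbps * duration_s
    return total_kilobits / 8 / 1000  # 8 bits per byte, 1000 kB per MB

# The 2000 kb/s example from the text, for a 10 minute video:
print(stream_size_mb(2000, 10 * 60))  # 150.0
```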
Each stream in a file has its own distinct bitrate.
A video is made up of frames. Viewed by themselves, frames are still (that is, not moving) images, but when updated rapidly enough, they appear to the human eye as a moving image. The number of times per second the frames are updated is called the framerate.
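Framerate also tells you how long each individual frame stays on screen, which is simply its reciprocal:

```python
def frame_interval_ms(framerate_fps: float) -> float:
    """How long each frame is displayed, in milliseconds."""
    return 1000.0 / framerate_fps

print(frame_interval_ms(25))            # 40.0  (PAL television)
print(round(frame_interval_ms(60), 1))  # 16.7  (smooth gaming/UI rates)
```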
Compressed video formats use keyframes, which give the codec a complete reference point for the frames that follow. The frames in between keyframes can then be compressed into far less data than if each were a full description of the whole image; often an in-between frame is just a list of the differences between the new frame and the previous one.
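A toy delta coder makes the idea tangible. Here frames are just lists of pixel values, the first frame is stored whole as a keyframe, and every later frame stores only the pixels that changed (real codecs are vastly more sophisticated, but the principle is the same):

```python
def encode(frames):
    """Store the first frame whole ('key'); later frames store only
    (index, new_value) pairs for pixels that changed ('delta')."""
    encoded = [("key", list(frames[0]))]
    prev = frames[0]
    for frame in frames[1:]:
        diffs = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        encoded.append(("delta", diffs))
        prev = frame
    return encoded

def decode(encoded):
    """Rebuild full frames by applying each delta to the previous frame."""
    frames = [list(encoded[0][1])]
    for kind, data in encoded[1:]:
        frame = list(frames[-1])
        for i, v in data:
            frame[i] = v
        frames.append(frame)
    return frames
```

If only one pixel changes between frames, the delta frame stores a single pair instead of the whole image, which is where the compression comes from.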
In files designed for maximum compression, the number of keyframes can be kept to a minimum, and a keyframe might only be inserted when the entire scene changes between one frame and the next, requiring a full description for the frames to come. This can present difficulties when a user wants to jump to a different part of the video; the player might have to spend a long time finding the last keyframe, then decoding all the frames from there to the desired spot.
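The cost of seeking can be sketched directly: to display an arbitrary frame, the player must rewind to the nearest keyframe at or before it and decode forward. The frame-type labels here are illustrative:

```python
def frames_to_decode(frame_types, target):
    """frame_types: e.g. ["K", "d", "d", ...] where "K" marks a keyframe
    and "d" a delta frame. Returns how many frames must be decoded just
    to display frame `target`."""
    key = max(i for i, t in enumerate(frame_types[: target + 1]) if t == "K")
    return target - key + 1  # the keyframe plus every delta up to target

# Sparse keyframes make seeking expensive:
sparse = ["K"] + ["d"] * 9          # one keyframe per 10 frames
print(frames_to_decode(sparse, 9))  # 10

dense = ["K", "d", "d", "K", "d"]   # a keyframe every 3 frames
print(frames_to_decode(dense, 4))   # 2
```

This is the trade-off the paragraph above describes: fewer keyframes means smaller files but more decoding work on every seek.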