Module 4. Capture of photographic and video images



    1. 1.1 Formats, types and sizes of sensors.
    2. 1.2 Aspect ratios.
    3. 1.3 File types and compression.
    4. 1.4 Shutter speeds and effects on the image.
    1. 2.1 Types of lenses.
    2. 2.2 Focal lengths, fixed optics and zoom lenses.
    3. 2.3 Focal lengths, formats and angles of coverage.
    4. 2.4 Focus and depth of field.
    5. 2.5 Diaphragm and f-numbers.
    1. 3.1 Relationships between sensitivity, lighting, shutter speed and diaphragm.
    2. 3.2 Composition of the frame.
    3. 3.3 Snapshot techniques.
    4. 3.4 Portrait techniques.
    5. 3.5 Techniques for capturing objects or people in motion.
    1. 4.1 Types of video cameras and their functions.
    2. 4.2 Video formats, compression, codecs, types and sizes of sensors.
    3. 4.3 Recording media.
    4. 4.4 Aspect ratios 4:3 and 16:9; pixel aspect ratio.
    5. 4.5 Frames per second and scanning.
    6. 4.6 Channels and audio options.
    7. 4.7 Integrated lenses and interchangeable optics.
    1. 5.1 Framing and focus.
    2. 5.2 Camera movements.
    3. 5.3 Luminance and color adjustments.
    4. 5.4 Routing of microphones and lines.
    5. 5.5 Monitoring and level adjustments.
    1. 6.1 Lighting equipment for photography and video.
    2. 6.2 Exposure.
    3. 6.3 Histograms.
    1. 7.1 Fragmentation and staging; organization of the space of the shot.
    2. 7.2 Arrangement of sequences and shots.
    3. 7.3 Identification of images and editing of metadata tags.
    4. 7.4 Technical characteristics of digital video recording systems.
    5. 7.5 Recording media suited to various image acquisition technologies.

Evaluation Criteria:

Film audiovisual pieces applying photographic and video image capture techniques, reinforcing their expressiveness through the resources and technical means of audiovisual language.

Evaluable learning standards:

1.1. Compare the image-capture process of the human eye and visual perception with its transfer to visual capture and reproduction systems.
1.2. Justify the effect of the lighting of the sequences to be captured by the audiovisual technical systems.
1.3. Build the aesthetic and narrative composition of the photographic and video images needed to produce simple audiovisual pieces or sequences.
1.4. Set up the photographic flashes or lighting units needed to adapt the lighting conditions of the scene to the photographic or video capture devices.
1.5. Shoot takes, shots and sequences with the video and photographic camera, applying the necessary color temperature, exposure, resolution, sound and metadata settings, with the information needed for identification.
1.6. Choose appropriate alternatives for recording on magnetic tape, optical discs, memory cards and hard disks, suited to the various types of filming or audiovisual recording.


Formats, types and sizes of sensors.

Let's start at the very beginning: what is a sensor? It is, no more and no less, the part of an electronic camera (and especially a digital one) that "acquires" the image we want to capture.

When we take a picture with a digital camera, the image formed by the lens has to be transformed into data (sampling) by the photodiodes in the camera's electronics. These data samples are called, as you have already guessed, pixels.

Current sensors are usually of the CCD or CMOS type.

Now that we know what the sensor is, let's look at its formats, types and sizes. Since we are dealing with digital cameras, we will focus on image sensors and their characteristics. The lens may have a wide field of view, but what actually gets photographed depends only on what the sensor can pick up, so the excess (often light from the edges of the lens) is ignored or discarded.

Size (which determines the type) matters: it ranges from the 1/2.5" sensors of phone front cameras up to the "full frame" format (36 × 24 mm, the size of a 35mm film frame). There are intermediate sizes too, notably the 1" sensors of advanced compacts and the APS-C sensors of many reflex cameras. Broadly, the larger the sensor, the larger each pixel can be and, therefore, the higher the quality of the photo.

Camera types follow sensor size: compacts typically use small sensors of around 1/2.5" to 1/1.7", "bridge" cameras (advanced compacts) sit in between, and SLR/DSLR (single-lens reflex) cameras use the larger APS-C or full-frame sensors.
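To make these size differences concrete, here is a minimal Python sketch of the "crop factor" idea (the function names are mine, and the sensor dimensions are the commonly published nominal values, which vary slightly between manufacturers):

```python
import math

# Approximate sensor dimensions in mm (width, height); nominal published values.
SENSORS = {
    "full frame": (36.0, 24.0),
    "APS-C": (23.6, 15.6),       # Nikon-style APS-C; Canon's is slightly smaller
    '1"': (13.2, 8.8),
    '1/2.5"': (5.76, 4.29),
}

def crop_factor(sensor):
    """Diagonal of full frame divided by this sensor's diagonal."""
    fw, fh = SENSORS["full frame"]
    w, h = SENSORS[sensor]
    return math.hypot(fw, fh) / math.hypot(w, h)

def equivalent_focal(focal_mm, sensor):
    """Focal length giving the same field of view on a full-frame body."""
    return focal_mm * crop_factor(sensor)

print(round(crop_factor("APS-C"), 2))        # ~1.53
print(round(equivalent_focal(50, "APS-C")))  # a 50mm lens frames like ~76mm
```

In other words, a 50mm lens mounted on an APS-C body frames the scene roughly the way a 76mm lens would on full frame.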

Aspect ratios

The photograph we are going to capture is a rectangle, and there is a relationship we must pay attention to: the one between the height and the width of the image.

A square would be 1:1, to give you the idea: an image 800px high would also be 800px wide. Let's use vertical and horizontal to be more exact. In a 3:2 ratio, if the vertical is 800px, the horizontal would be 1200px. Proportions matter because of the need to "crop" or enlarge an image, that is to say, to pass it from one aspect ratio to another.

The commonly used ratios are 3:2 (standard photo), 4:3 (the standard in compacts and smartphones) and 5:4 (large format).

In panoramic modes, the standard ratios are 16:9 (video rather than photography), 2:1 (photography only) and 2.39:1 (cinema only).

In photography we can also use vertical ratios (2:3 or 3:4, the result of turning the camera vertically), but these are nothing more mysterious than the previous proportions applied the other way around.

By the way, since we are talking about proportions, what would the golden ratio look like as an aspect ratio? Approximately 1.618:1, often rounded to 1.6:1.
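Since these ratios are just arithmetic, a small Python sketch can make them concrete (the helper name is my own, not any standard API):

```python
from fractions import Fraction

def other_side(known_px, ratio_w, ratio_h, known="height"):
    """Given one side of an image and an aspect ratio, return the other side."""
    r = Fraction(ratio_w, ratio_h)  # width : height
    return int(known_px * r) if known == "height" else int(known_px / r)

golden = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618:1 as an aspect ratio

print(other_side(800, 3, 2))                   # 3:2, 800px high -> 1200px wide
print(other_side(1920, 16, 9, known="width"))  # 16:9, 1920px wide -> 1080px high
print(round(golden, 3))                        # 1.618
```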

And how does the choice of aspect affect communication? A horizontal photo will always feel more relaxed to the observer than a vertical one, and the closer we stay to familiar formats (4:3 or 16:9), the less we will have to justify why we chose one format over another.

File types and compression.

We already know what a sensor is, its types, and aspect ratios. Now it's time to look at the file formats available for saving photographs and frames. Let's see the common ones.

The RAW format is the "raw" photo with all the data the sensor has captured, so it takes up a lot of internal memory. It is a "digital negative" that has to be "developed", and it has no single standard (it even goes by different, mutually incompatible names depending on the camera: NEF, CRW, ARW, DNG...). Despite all this, it is the best file for later processing the image. Do not forget that it is not really an image yet, since it needs a program (GIMP or Photoshop, say) before we have one.

The GIF format (Graphics Interchange Format) only allows 256 colors and is very simple. Ideal for Internet drawings and logos. As you surely know, there are also animated GIFs.

The PNG format (Portable Network Graphics) offers good lossless compression and is ideal for flat colors. In many ways better than GIF; its support was once patchier, though today it is universal on the Internet.

The Tagged Image File Format (TIFF) was the standard for digital cameras until JPEG took over. It is a very "heavy" format (that is, it takes up a lot of space on memory cards).

The JPEG (Joint Photographic Experts Group) format is the king of image formats. We recognize it by the extension JPG, and it is used intensively on the Internet. The camera compresses the image (which means a loss of quality/information), so every time it is re-edited and re-saved in the same format, we lose even more quality. In exchange, a photo taken in this format occupies little internal memory and arrives already processed in-camera (color, contrast, etc.).

The PSD format belongs to Photoshop. It is proprietary rather than a standard, but the program is so widely used that sharing PSD files rarely causes problems.

Shutter speeds and effects on the image.

Shutter speed is the time the shutter remains open to let light reach the camera's image sensor. We'll look at the fractions of a second below, but for now note that the longest standard exposure I know of is around half a minute. An outrageously long time that can be extended even further with the "bulb" function (if the camera has it), which keeps the shutter open for as long as we hold down the shutter button. The result can be disastrous if we touch the camera during the exposure.

Capturing the image at the correct speed matters not only to get a sharp snapshot that matches our original idea, but also to create a range of effects on the image, which is partly the purpose of this section. Movement is frozen almost instantaneously if the shutter speed is fast (capturing less light), but sometimes we will want to record the movement itself (like the light trails produced by car headlights), for which we will need somewhat slower speeds (capturing more light).

To capture a "frozen" image of a moving object, vehicle, person or animal, we must combine three things: a fast shutter speed, a high ISO and a lens aperture that is ideally large. To create the sense of movement mentioned in the previous paragraph, we use the opposite configuration.

These variable speeds are not only useful for conveying or suppressing motion. They also let us shoot in low light or photograph the starry sky at night. Other interesting effects are the "silk effect", when we capture flowing water at a slow speed, and "painting with light", when we move something that projects light in front of an inanimate object.

Speeds from about 1/4000 up to 1/8000 (1/16000 on a few cameras) freeze movement sharply; 1/2000 down to 1/30 covers normal photographs, and 1/15 down to several seconds produces motion effects. Note that cameras usually display fractional speeds without the "1/", so 1/2000 appears simply as 2000; the position marked "B" (bulb) keeps the shutter open for as long as the button is pressed.
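The display convention and the full-stop ladder are easy to sketch in Python (illustrative only; the exact speed ladder varies by camera and the function name is mine):

```python
from fractions import Fraction

def display(speed_s):
    """How cameras usually label a shutter speed: fractions drop the '1/',
    whole seconds get a double-quote mark."""
    if speed_s >= 1:
        return f'{speed_s}"'
    return str(round(1 / speed_s))

# Nominal full-stop speeds: each step roughly doubles the exposure time.
full_stops = [Fraction(1, d) for d in (2000, 1000, 500, 250, 125, 60, 30, 15, 8, 4, 2)] + [1]
print([display(s) for s in full_stops])
```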


Types of lenses.

The image reaches the sensor through a set of lenses. Cameras divide into two groups: true reflex cameras, which can take an endless variety of lenses, since these screw onto the camera body, and compacts, whose lens is fitted at the factory and handles every function.

The lenses of reflex cameras attach to the camera body via the mount. As you would expect with so many manufacturers, most lenses only fit a certain type of camera. Exclusivity versus standardization.

Then there is the image circle, which corresponds to what the lens projects and how the sensor will collect it. Picture a circular image being registered by a square or rectangular sensor: if the circle is generously large, the excess light is simply cropped away, but if it is too small, the edges of the recorded image suffer (vignetting).

Once we have the lens mounted and know the image circle it will produce, we must know its focal length, a number in millimeters that determines the size at which subjects are rendered and how much of the scene is captured, and its aperture, related to the f-number we will see later, which is a ratio between the diameter of the opening and the focal length. Finally, some lenses have rotating rings that lengthen and shorten the distance between their internal elements. This is a "zoom" that works like focusing but without blurring; it is optical, unlike the digital zoom of mobile-phone cameras.

Let's look at the types of lenses, regardless of whether or not they can be separated from the camera. The standard lens reproduces the vision of the human eye: around 50mm. The telephoto lens covers focal lengths longer than the human eye's: 300mm, for example. The wide-angle lens covers focal lengths shorter than the human eye's: around 22mm. Lenses that cover all three ranges are called all-purpose (or superzoom) lenses. For photographing very small objects, or subjects very close to the lens, macro lenses are used; if you remember, we used them in an activity in the first evaluation. And let's not forget the fisheye, which captures an image covering 180º.

Focal lengths, fixed optics and zoom lenses.

This is where the equipment gets complex. We may have a reflex camera, but then we need to carry a good number of lenses, one for each type of photography. That doesn't happen with a compact, of course, but for the same reason compacts are less versatile. Fixed optics (fixed focal length or "prime" lenses) give higher quality for photographs at their particular focal length. If we use an 80mm prime for portraits, we will get better results than with a compact's lens, however versatile that lens may be. Then come the problems. Removing and fitting lenses means leaving the sensor uncovered, so it gradually collects dust. Besides, changing the lens every few minutes is a nuisance, and many photographers, as you will have seen, carry two cameras with different lenses. In short, a hassle. An imperfect solution is the zoom lens, which covers several focal lengths. Zooms are somewhat worse at gathering light (they generally need more ISO and more knowledge from the photographer) and we lose some sharpness, but they are convenient because with a single lens we can shoot at all kinds of distances.

Focal lengths, formats and angles of coverage.

The focal length is the distance between the lens and the focal point. If we also consider the angle gathered at the focal point (independently of the angle of the lens), we get a wider or narrower capture of the scene. Let me explain: a lens that gathers the human field of vision sends the sensor roughly the image an eye would capture. If we change to a wide-angle lens, which takes in more of the scene, we have to move closer to the photographed object to get the same framing as with the normal lens (50mm, for example).
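The relationship between focal length and how much of the scene is captured can be sketched with the thin-lens angle-of-view formula (an approximation; the function name and the 36mm full-frame width are assumptions on my part):

```python
import math

def angle_of_view(sensor_mm, focal_mm):
    """Horizontal (or vertical) angle of view in degrees,
    thin-lens approximation: 2 * atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Full-frame sensor width is 36mm: the classic comparison of lens types.
for f in (22, 50, 300):  # wide-angle, standard, telephoto
    print(f"{f}mm -> {angle_of_view(36, f):.1f} deg")
```

It shows numerically what the text says: the shorter the focal length, the wider the angle of coverage.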

Focus and depth of field.

We talked about sharpness before. That is focus, and it depends on the distance between the camera and the object we photograph or record. Remember that it also depends on the lens (focal length) and the angle of coverage. Depth of field, in turn, makes background objects more or less sharp. When the depth of field is shallow, the diaphragm is opened wide and the background appears out of focus. Conversely, for a deep depth of field, the diaphragm is opened very little and we see both the subject and the background sharply.
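One standard way to quantify this is the hyperfocal distance formula; here is a minimal Python sketch (the 0.03mm circle of confusion is a conventional full-frame value, and the function name is mine):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f: focus here and everything
    from H/2 to infinity is acceptably sharp. coc is the circle of
    confusion (~0.03mm is the usual full-frame convention)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 50mm lens at f/8: focus at roughly 10.5m and sharpness
# stretches from about half that distance to infinity.
print(round(hyperfocal_mm(50, 8) / 1000, 1), "m")
```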

Diaphragm and f-numbers.

Now things get complicated. The diaphragm, as we have seen, determines the distance range over which we achieve sharpness, adding one more variable on top of the focal length, which is not limited to the angle of coverage. Think of your own eye. When a lot of light hits it, you narrow your eyelids (you even frown) and your iris closes to a tiny opening. That is what we do mechanically with the diaphragm. A very closed diaphragm renders the whole image sharply, given enough light. If your pupil is dilated, as happens at the optician's when they want to examine the back of the eye, the light bothers you, you see out of focus and everything seems surrounded by halos of light.

Well, that opening of the camera's diaphragm determines how much light reaches the sensor, and it is measured in f-numbers (the focal ratio between focal length and aperture diameter). Typical f-numbers run from f/2 or wider (the diaphragm almost fully open) to f/16 or beyond (the diaphragm nearly closed). Comparing with your vision, the pupil dilated by medication corresponds to f/2, and the iris closed to a tiny dot corresponds to f/16. In practice, the higher the f-number, the greater the depth of field and the more limited the light that enters.
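The f-number ladder follows powers of the square root of two, which a few lines of Python can show (a sketch; by convention cameras label 5.7 as 5.6 and 11.3 as 11):

```python
import math

def f_stops(start=1.0, steps=9):
    """Standard full-stop f-numbers: each is the previous times sqrt(2),
    so every step halves the light reaching the sensor."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(steps)]

print(f_stops())  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]
```

This is why closing from f/2 to f/16 means six stops, i.e. 1/64 of the light.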


Relationships between sensitivity, lighting, shutter speed and diaphragm.

Everything really is quite interrelated. Without being exact, we can apply this basic rule of thumb to set up the camera for a shot. Let's see. The aperture of the diaphragm is set by the f-number (remember: f/2 is a wide aperture, giving a sharp subject and a blurred background, while f/22 is a very small one, giving a sharp subject and background). The shutter speed governs sharpness, especially of movement (1/1000 freezes the image; 1/2 gives the silk or trail effect). Sensitivity to light is measured by ISO (ISO 100 is low sensitivity and a clean image; ISO 12800 gathers far more light at the cost of a lot of noise). Given all this, we can relate the three settings to the lighting. For an image with a blurred background and a sharp subject, in bright light and with no movement, the f-number, the shutter time and the ISO should all be low. Conversely, to get sharpness in all planes and freeze movement under somewhat weak lighting, the f-number, the shutter speed and the ISO should all be high.
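One compact way to see that these settings trade off against each other is the exposure value (EV): combinations with the same EV admit the same exposure. A minimal sketch, with my own function name:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV at ISO 100: log2(N^2 / t), adjusted for ISO. Settings with the
    same EV let the same effective amount of light act on the image."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

sunny = exposure_value(16, 1 / 125)  # the classic "sunny 16" rule
equiv = exposure_value(8, 1 / 500)   # two stops wider, two stops faster
print(round(sunny), round(equiv))    # both ~15 EV: equivalent exposures
```

Opening the diaphragm two stops while shortening the shutter two stops leaves the exposure unchanged; what changes is depth of field and motion rendering.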

Composition of the frame.

Framing and shot types go hand in hand, so what we already know about shot types applies here. It is also important, though, to consider the colors and shapes of the image we want to capture. I call it "weight": what matters has to "weigh" more than what is secondary. For this we use the rule of thirds. Do you remember those puzzles in hobby magazines where you had to copy a drawing square by square from the original? We can get something similar in our camera by activating the grid overlay ("guides" or "guidelines"). Looking through the viewfinder, we will see the image divided into nine frames. What is important, what "weighs", should sit along one of the lines that create those thirds, and the point of attention at one of their intersections. It does not need to be in the center; the subject may well occupy several of the nine rectangles, even three of them in an L shape. But please, rules in these cases exist to be broken if we have better ideas. Do not hesitate to do something more complex if you believe the result will be more valuable.
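If you want the grid as numbers rather than lines in the viewfinder, the four rule-of-thirds intersections are simple to compute (an illustrative helper of my own, not any camera API):

```python
def thirds_points(width, height):
    """The four intersections of the rule-of-thirds grid, in pixels."""
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]

print(thirds_points(1920, 1080))
# [(640, 360), (640, 720), (1280, 360), (1280, 720)]
```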

Snapshot techniques.

What a memory! You may know the old Polaroid (Polaroid-style cameras are still made, but they are digital and far removed from other compacts). Instagram's icon recalls Polaroid's rainbow stripe, and even its square photos. When we went out at night with friends in past decades, we had no phones or cameras for group photos. Travelling photographers with Polaroids offered to immortalize our moments of fun, and in less than a minute we had a developed photo in our hands. I even remember them turning the camera upside down to get the flash lighting from below. My, what memories! All this faded when Polaroid stopped making film for these cameras in 2008. What "snapshot" means for us today is that the photo has to come out right the first time. We must therefore prepare the photograph before taking it; it is no good shooting many and choosing the one we like best. The framing, the sensitivity appropriate to the light of the moment, the color rendition to be captured ("color temperature") and shooting at the highest quality (implying no compression, so TIFF or RAW rather than JPEG) will make the snapshot better. It will never be as good as a well-made portrait, with studio lighting, as many takes as necessary and photo editing, but almost. If you are ever at a sporting event or a concert wanting to sell a picture to a magazine or newspaper, you will know what I mean.

Portrait techniques.

If there is a trade associated with photography, it is the photography studio, and a large share of its photographs are portraits of the clients who hire us. That is why this section is fundamental. To begin, consider the shot: close-up, medium shot or extreme close-up. Even for a full-body portrait you will need to decide the angle. Once you have decided (the person portrayed will often give you clues without saying anything, since something about them will be striking), fit the appropriate lens for the distance (ideally a fixed one if you are in the studio, though on the street you can never have everything), focus (taking care whether you want the background more or less sharp), and pay attention to the person portrayed. There are photographers who specialize in children, who move a lot and are difficult to focus, or in groups, such as the Bachillerato class photo, where you also have to direct everyone to get the best out of them. And what is the most important feature of a face? We would probably all agree on the eyes, though there is always room for other choices: the lips, the nose or simply the hair. But you will rarely go wrong focusing on the model's gaze. Do not forget to consider whether a frontal or a profile picture works better. For the profile there are several possibilities; think of the face as a geometric figure, a prism with two facets to the right and two to the left in addition to the front. The profile also allows shots from behind (with the back turned or sideways). If the portrait includes part or all of the body, we must think about posture and pose, in addition to everything said about the face. With a professional model you will have much of the work done for you, but with the general public you will have to be creative: perhaps that fantastic pose they propose is poor and offers nothing.
Naturalness will let you find better moments and poses, so it is worth making the model feel comfortable and safe, almost as if we were not portraying him or her, to get the best out of them. Then comes the background. In a studio you will already have thought about a backdrop or added various elements. Have you seen those vintage photos featuring striking chairs and vases of flowers? Transferred to our time, the idea is to dress the background with something attractive without letting the face get lost, and to light it appropriately. On the one hand, we must light the face correctly to avoid unwanted shadows (never only from the front, and always from several points); on the other, we should make the light, like the background, part of the photograph. By the way, I have said nothing about flash and timing, but we will get to that later. This section could go on for a long time and occupy more than one course, so we must leave it here. Read and practice everything you can, because it is fundamental.

Techniques for capturing objects or people in motion.

The goal of this section cannot be just how to capture a moving object or person, but how to capture the moment together with the movement. Shutter speed is the fundamental element for this type of photograph. If we use high speeds (1/2000, for example), the object or person will be frozen in the image, as if floating. If that is our goal, fantastic. But if we want to convey the movement, the shutter speed must drop drastically, even below what seems logical at first (below 1/10, even). With such a long exposure, the movement appears as a trail in the image. In addition, the more light the sensor gathers and the more stable the camera is, the better the chance of capturing the movement while keeping the subject itself sharp. Next, it is worth considering the framing. Rather than leaving the moving subject dead center, give it space to travel into. For example, with a cyclist entering from our left and heading right, it is usually better to place the rider fully within the left vertical third, or slightly toward the center, leaving the right third empty. All of this is much easier with video: with a panning shot, with movement captured from a static camera, with a forced zoom and above all with perspective (from below, from above, a Dutch angle, etc.) we capture not just the object or person but the movement itself.
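A rough back-of-the-envelope way to pick a freezing shutter speed is to limit how many pixels the subject moves during the exposure. This sketch assumes a subject crossing the frame at constant speed (all names and numbers are illustrative, not a standard formula from the text):

```python
def max_shutter_for_freeze(frame_px, crossing_s, max_blur_px=2):
    """A subject crossing a frame frame_px wide in crossing_s seconds moves
    frame_px / crossing_s pixels per second; keep its motion blur under
    max_blur_px by limiting the exposure time."""
    speed_px_s = frame_px / crossing_s
    return max_blur_px / speed_px_s

# A cyclist crossing a 6000px-wide frame in 2 seconds:
# the shutter needs to be about 1/1500s or faster.
t = max_shutter_for_freeze(6000, 2)
print(f"1/{round(1 / t)}")
```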


Types of video cameras and their functions.

We already discussed the basic types of digital still cameras (compact and interchangeable-lens, mainly). Now let's look at video cameras. The same distinction applies, since in many cases we will use one DSLR interchangeably for stills and video, but in the professional world there are always specialized cameras. Think of two types of video camera: portable and fixed. If we are going to shoot a scene in which we must move, camcorders are medium or small, since we have to be able to carry them on the shoulder. But in a studio, or when filming a movie, the priority of maximum quality and no need to move the "beast" leads to large cameras. A portable camera usually incorporates all kinds of extras so as not to need external equipment, for example a microphone for audio and a lamp for lighting, and all of it compact and small, so the result can be semi-professional but not very advanced. Things have improved greatly: older camcorders needed room for recording and storing the captured images on video tape, which made them bulky contraptions. With discs the weight and volume shrank, and memory cards reduced them further still. Then there are studio cameras, which are quite light since they carry nothing beyond what is needed to capture the image: audio is picked up by separate microphones and lighting is handled by other professionals. These cameras sit on tripods or cranes for travelling shots and are connected to the control room by cabling. By the way, it would not be fair to finish without mentioning Super-8 film cameras, the "tomavistas" (home movie cameras). A delight in their day, with mini films that went almost directly into home projectors.

Video formats, compression, codecs, types and sizes of sensors.

We saw JPEG, RAW, GIF and the other photo formats; now let's complete the picture with video. In video there are fewer formats, so it is easier to find a standard. One is AVI (Audio Video Interleave), a container that produces heavy files and can hold video encoded with codecs such as DV, or compressed with DivX or XviD; practically every video playback program can read it, with or without compression. The other great standard is MPEG (Moving Picture Experts Group). It will remind you of JPEG, and rightly so. The video is compressed, and the standards carry numbers: MPEG-1 roughly matches CD video quality, MPEG-2 is used for DVD, the "3" you know is MP3, the audio-compression layer, and MPEG-4 targets compression for the web, where file size should be as small as possible. The two big software companies have their own formats, WMV (Microsoft) and MOV (Apple QuickTime), with their own compressors and codecs, all designed for the network; and finally there are RealMedia (Real) and Adobe Flash video (FLV), which for years was the standard for Google products (YouTube, for example). Sensors were covered in the corresponding section when we looked at cameras.
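Compression choices ultimately show up as file size, which is just bitrate times duration; a minimal sketch (container overhead is ignored and the numbers are illustrative, not tied to any specific codec):

```python
def file_size_mb(bitrate_mbps, seconds):
    """Approximate size of a compressed clip: bitrate (megabits per second)
    times duration, divided by 8 to convert bits to bytes."""
    return bitrate_mbps * seconds / 8

# Ten minutes at a typical 8 Mbps web encode: about 600 MB.
print(round(file_size_mb(8, 10 * 60)))
```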

Recording media.

Today this section may seem outdated, since it looks as if all we have is digital recording to SD cards or hard disk, but we should know each of the media used for video over time, both for their own sake and to see whether they might still fit our current needs. The earliest recording (whenever the camera actually recorded, rather than acting as a mere conduit between the image and the production console) was on celluloid film, just as for cinema, mainly in the broadcast world. Once cameras became popular and domestic video appeared, recording moved to electromagnetic tape, just like audio tape. The systems varied, but the principles were the same. We had VHS, Betamax and Video 2000 as the first standards, until VHS dominated the market. Then came small-sized tapes for each of those formats, which needed an adapter to play in the VCR (the player for each of those formats), until digital tape arrived and revolutionized everything: data was now recorded numerically and the image played back was faithful to the one captured, avoiding degradation from the tape itself. Then recording moved to digital media proper: first to disc, both CD and DVD, and soon to cards, such as CompactFlash. The main problems were recording speed (a lot of data to write to a digital medium in very little time) and capacity (few minutes of video per card). Recording to hard disk (HD) solved part of the problem, but a cable was still needed to get the data out; now we have SD memory cards of up to 128 GB, a marvel for recording everything we want in a tiny space.

Aspect ratios 4:3 and 16:9; pixel aspect ratio.

The standard formats for video are 4:3 (old television, or "fullscreen") and 16:9 (the cinema-like format, or "widescreen"). When we record for one, we will have problems with the other. Today 16:9 is used across video, since computer monitors, televisions, mobile phones and many game consoles share the format with movies, so 4:3 has fallen into disuse; but in the early digital era, shaped by the domestic use of these technologies in the 80s and 90s, you could move between the two according to your needs. You could even buy a film on VHS or DVD in both 4:3 and 16:9 versions. For us it is now a minor problem, but let's see how aspect ratio is maintained, how we deal with the part of a 16:9 image that cannot be seen in 4:3, and what calculations are involved. First, express each ratio as a decimal: 4:3 is 1.33:1 and 16:9 is about 1.78:1. Depending on the image quality (number of pixels), one operation or another is needed to move between the two. In addition, standard-definition television pixels are not square, unlike the square (1:1) pixels of digital devices such as computers and consoles, which complicates the arithmetic further. Honestly, doing this by hand invites mistakes, and it is better to let the computer program (or the mobile app) do it. The results look like this: a 16:9 image shown full-screen in 4:3 loses its side margins (if you record in 16:9 knowing it may end up in 4:3, put nothing important in the outer thirds of the frame), and a 4:3 image expanded to fill 16:9 keeps the center and loses its top and bottom. The other option is black bars: from 16:9 to 4:3 you get bars above and below the picture, and from 4:3 to 16:9 the bars appear on the left and right sides.
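The black-bar arithmetic is a fit-inside scaling; here is a minimal Python sketch assuming square pixels (the function name is mine):

```python
from fractions import Fraction

def fit(src_w, src_h, dst_w, dst_h):
    """Scale a source frame to fit inside a destination without cropping;
    returns the scaled size plus the bar thickness on each side
    (letterbox bars top/bottom, pillarbox bars left/right)."""
    scale = min(Fraction(dst_w, src_w), Fraction(dst_h, src_h))
    w, h = int(src_w * scale), int(src_h * scale)
    return (w, h), (dst_w - w) // 2, (dst_h - h) // 2

# 16:9 material on a 4:3 screen -> bars above and below (letterbox).
print(fit(1920, 1080, 1024, 768))
# 4:3 material on a 16:9 screen -> bars left and right (pillarbox).
print(fit(1024, 768, 1920, 1080))
```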

Frames per second and scanning.

The cinematic and video image is built from the effect produced by many static photographs shown in fractions of a second. The eye begins to fuse still frames into movement at roughly twelve frames per second. The standard of 24 frames per second was settled in the 1930s (with the arrival of sound cinema). Far higher rates, up to 300 frames per second, have been tested, and nowadays 48p or 50p are common, partly because of the problems televisions have always posed; and that brings us to the odd concept of "scanning". By scanning, in this section, we mean interlaced scanning, used to avoid flicker on tube televisions. Have you noticed that when a tube television was filmed on video, wide, annoying bands always crawled across it, ruining the shot, even when the television was not the subject? That is the flicker that interlacing tries to hide from the human eye: the video camera, faster than we are, picked it up and recorded it. The bands come from the sweep of the cathode-ray beam. Nowadays the problem is largely solved, since modern televisions can store the image and display it stably, even showing each frame more than once.
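Frame counts and clock time convert with simple division; this sketch shows non-drop-frame timecode at 25 fps (the function name is mine):

```python
def to_timecode(frame, fps=25):
    """Convert a frame count to HH:MM:SS:FF timecode (non-drop-frame)."""
    s, ff = divmod(frame, fps)     # whole seconds, leftover frames
    m, ss = divmod(s, 60)
    hh, mm = divmod(m, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(to_timecode(90125, fps=25))  # 90125 frames at 25 fps -> 01:00:05:00
```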

Channels and audio options.

It all started with the ability to record and play back audio. Since the attention was on a single sound source, recording began with one channel: mono. Soon it became clear that we have two ears, and that thanks to this we can locate the origin of a sound and place sound sources spatially in our perception, so we moved to stereo. All good, especially for music and dialogue, until you enter the spatial dimension of sound. The stereo image is very good and for many uses more than enough, but its field needed to be expanded. Hence the X.1 systems, where X stands for the number of speakers (4.1, 5.1, 7.1, etc.) dividing the space between right and left, plus the ".1" subwoofer. When we record, we have to take the audio channels into account, especially if there are more than two. To recreate stereo sound we capture the signal with two microphones, or with one that can pick up the sound sources separately. For more channels, the number of microphones and the work at the mixer (number of capture channels) will be greater.
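The X.1 naming convention mentioned above maps directly to a channel count. A trivial sketch of that mapping (the helper name is mine):

```python
def channel_count(layout):
    """Total number of speaker channels for a layout label such as
    "mono", "stereo" or an X.1 string like "5.1" or "7.1"."""
    named = {"mono": 1, "stereo": 2}
    if layout in named:
        return named[layout]
    main, sub = layout.split(".")
    return int(main) + int(sub)   # main speakers plus subwoofer(s)

for fmt in ("mono", "stereo", "5.1", "7.1"):
    print(fmt, "->", channel_count(fmt), "channels")
```

So a 5.1 mix needs six capture or playback channels, which is exactly why the mixer work grows with the format.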

Integrated lenses and interchangeable optics.

We finish this long section with something we already know. Current cameras are divided between those whose lens is integrated, forming a single piece (compact cameras), and those with the camera body on one side and the lens on the other (the most advanced, such as SLRs). Obviously, if the lens is interchangeable we are not limited to what the camera offers from the factory and we can swap in different optics, but the initial investment will be greater and we will have the problem of the mount: the more often we change lenses, the greater the chance that the threads will wear out.


Frame and focus.

Framing was already covered in another block, so let us concentrate on focus. When we frame, we decide how we are going to portray a person, an object or a landscape. With focus we decide what the viewer's attention will be fixed on, what we want to stand out. It is a matter of sharpness, but also of composition. We will take into account the depth of field and the focal length so that what matters to us is sharp, regardless of what is in front of or behind it. We will also have to think about whether what we want to focus on is static, can move unpredictably (a child's portrait) or never stops moving (sports photography). We control all the previous aspects (light, framing and composition) and finally we focus.
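The interplay of depth of field and focal length mentioned above can be made concrete with the standard hyperfocal-distance formulas. This is a sketch using the usual thin-lens approximations; the 0.03 mm circle of confusion is a common full-frame assumption, and the function names are mine:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm: focusing here makes everything from
    half that distance to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a subject distance."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        return near, float("inf")   # everything to infinity is sharp
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# 50 mm lens at f/8, subject at 3 m: a comfortably wide zone of sharpness.
near, far = dof_limits_mm(50, 8, 3000)
print(round(near / 1000, 2), "m to", round(far / 1000, 2), "m")
```

Closing the diaphragm (larger f-number) widens the sharp zone, which is why sports and children's portraits often trade aperture for focusing safety margin.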

Camera movements.

You will remember that cameras and lenses can be moved in different ways, along the coordinates of the image (up, down, right, left), and that the whole camera can also be moved on a crane or a dolly. And on top of all this we have the zoom, which is not a physical movement but an optical one. Let us now look at it all in an organized way. The physical movements are the pans (horizontal) and tilts (vertical), the tracking shots (travellings) and the rotations. The optical movements are those produced by changes in the arrangement of the elements inside the lens: the zoom and the focus pull (we saw focus in the previous paragraph). In both cases we change the viewer's perspective. Digital movement takes the captured image and modifies it. The most common is the digital zoom, which differs from the optical zoom in that it is really a crop of the image; on low-quality cameras it produces noticeable pixelation. The second digital movement is produced by the sensor stabilizer: when the camera makes adjustments to stabilize, there is a displacement which, although we tend to dismiss it, can be attractive, because it acts like a mini tracking shot and generates movement. It is like those spy films in which we see, through the spy's camera, how he photographs whoever passes by, and the image adjusts automatically while moving and refocusing.
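The claim that digital zoom "is really a crop of the image" is easy to show in code. A minimal sketch (the function and its crop-box convention are mine):

```python
def digital_zoom_window(width, height, factor):
    """Digital zoom is a centre crop that is then scaled back up:
    return the (left, top, right, bottom) crop box for a zoom factor."""
    if factor < 1:
        raise ValueError("zoom factor must be >= 1")
    new_w, new_h = int(width / factor), int(height / factor)
    left = (width - new_w) // 2
    top = (height - new_h) // 2
    return left, top, left + new_w, top + new_h

# 2x digital zoom on a 1920x1080 frame keeps only the central quarter
# of the pixels, which is why quality drops compared with optical zoom.
print(digital_zoom_window(1920, 1080, 2))   # (480, 270, 1440, 810)
```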

Luminance and color adjustments.

These settings are used to modify the quality of the image. Since a photographed image is in fact light, these adjustments can serve to improve, and even modify, the photographed reality. A photograph captures a range of colour values: in black and white they run between the two extremes through every intermediate shade of grey, and in colour the sensor records the whole range it is able to pick up. This is where the luminance settings come into play. What this control does is make a certain band of the light spectrum brighter (or dimmer), highlighting or softening the lights and colours of the photograph to achieve various goals within an image. When we push a colour to its maximum it becomes bright and washed out, while taking it to the minimum leaves it dark and dull. Imagine you want to highlight an object, or the colour of the sea or the sky, to bring out or suppress the background and, above all, to let the colours of a landscape shine like never before. Colour adjustments have to be made with care, since the final result can come out tinted and excessively falsified. The balance of the colours must not be broken (unless that is our artistic goal) if we want a realistic image. With these adjustments we can modify filters, intensities and tone saturation, and on digital equipment we can even select a single colour to modify it relative to the others.
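At its simplest, boosting or cutting a colour band amounts to multiplying one channel of each pixel by a gain and clamping the result. A toy sketch, assuming 8-bit RGB pixels (the function name and gain values are mine):

```python
def adjust_channel(pixel, gains):
    """Apply a per-channel gain to an (R, G, B) pixel, clamping to the
    0-255 range. Gains above 1 brighten a band, below 1 darken it."""
    return tuple(min(255, max(0, round(v * g))) for v, g in zip(pixel, gains))

# Warm up a mid-grey pixel: boost red, leave green, cut blue slightly.
print(adjust_channel((128, 128, 128), (1.2, 1.0, 0.85)))   # (154, 128, 109)
```

Note how the equal-grey balance is broken on purpose here; applied to a whole image, exactly this kind of imbalance produces the tinted, "falsified" look the text warns about.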

Routing of microphones and lines.

We will devote more time to microphones and the different kinds of pickup in a later block, but let us advance some material. The title of this section refers to the two types of input a mixing console uses to pick up an audio signal. The first, the microphone input, is a fairly wide circular socket with three contacts (the XLR connector), normally used to plug in microphones. The second is the line input, used to connect what we know as a jack (6.35 mm, since smaller sizes are not suitable for professional use); although we can find microphones that use this input (balanced or not), it is normally used for musical instruments. Both are analogue inputs, although nowadays it is also possible, and in some cases exclusively, to connect digital inputs, mainly optical ones.

Monitoring and level adjustments.

Once we have connected the audio inputs, the mixer must leave the levels in the best position for simultaneous video and audio recording. By monitoring we understand the checks the sound technician makes to confirm that every signal is arriving, regardless of its balance, while level adjustment makes the sounds proportionate in volume and space with respect to each other. A music concert illustrates this quite well, even if it is not strictly the audiovisual field: the microphones are placed, the electric instruments are sent through line inputs to the mixer, and the sound technician, once he has a clean signal from each element individually, sets the partial volume of each instrument or singer so that they sound balanced. In sound for an audiovisual work, the microphones are likewise placed strategically, to capture all the possible audio (both the dialogue and the ambient sound) and outside the visual field of the cameras so that they do not appear in the image. The sound technician checks them first to see that everything is being captured optimally, and finally (whenever possible, outside a live recording) mixes them, adjusting them to the images.
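Level adjustment on a mixer is usually expressed in decibels, which map to linear amplitude gains before the tracks are summed. A toy model of that summing (the function names and sample values are mine; real mixers work on continuous audio streams):

```python
import math

def db_to_gain(db):
    """Convert a fader setting in decibels to a linear amplitude gain."""
    return 10 ** (db / 20)

def mix(tracks, gains_db):
    """Sum several tracks sample by sample after applying each fader,
    a toy model of the level adjustment a mixer performs."""
    gains = [db_to_gain(db) for db in gains_db]
    return [sum(g * s for g, s in zip(gains, frame))
            for frame in zip(*tracks)]

voice = [0.5, 0.5, 0.5]
music = [0.5, 0.5, 0.5]
# Keep the voice at unity (0 dB) and pull the music down 6 dB
# so the dialogue stays on top of the ambience.
print(mix([voice, music], [0.0, -6.0]))
```

The -6 dB fader roughly halves the music's amplitude, which is the everyday rule of thumb behind "pull it down six dB".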


Lighting equipment for photography and video.

Lighting equipment divides into continuous light sources, such as halogen lamps, tungsten lamps, fluorescents or LEDs, which illuminate what we are photographing throughout, and instantaneous lighting, such as the flash, whether mounted on our camera or placed at different points with remote triggering (fired from wherever we take the image) for that photographic or video shot. In the case of flash, we can count on the small units that attach to the camera or on studio flashes, which are as big as the continuous lights. There are intermediate lights, called modelling lights, which work continuously but simulate what we are going to get from the flashes. Continuous lighting equipment is large, heavy and consumes a lot of energy; in some cases it is also expensive, although it is becoming ever easier to obtain affordable equipment with acceptable performance. Apart from all this, you will have seen that reflective panels and umbrellas are used to bounce the light from a source towards places that are not directly illuminated. In other cases they soften the lighting, especially from flashes, or do just the opposite, as with the diffusers that attach directly to the light source. Then we have colour filters, for when we do not want to apply a filter to the whole photograph but do want tinted light in the background or on what we are recording or photographing; and a white backdrop to create a neutral background. And, of course, sunlight, possibly the best source if we have it at the moment of exposure. An open window can be useful for some shots: perhaps not in a professional photo studio, but certainly in our home-made productions.


Exposure.

We frame the image we want to photograph, we focus it, and we use the light meter to measure the available light, which will determine our camera settings. And now what? We can think first about how to make the exposure. Do not forget that we are doing digital photography, so the pixels can be affected by the settings, creating what we called "noise" in a previous block. To avoid that, we can use a long exposure time, so that the sensor captures the light optimally without losing data, and the colours we photograph are recorded just as we want. We can give the exposure its correct time or, on the contrary, force the capture towards a greater degree of exposure to gather more light, or towards less, to tone down highlights and darken the image. To control this we already know two techniques: the ISO and the shutter speed.
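The trade-off between aperture, shutter speed and ISO can be captured by the standard exposure value formula, EV = log2(N²/t), adjusted for sensitivity. A minimal sketch (the function name is mine):

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Exposure value referenced to ISO 100: EV = log2(N^2 / t),
    shifted down one stop for each doubling of ISO. Equal EV means
    equal exposure of the scene."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Equivalent exposures: opening the diaphragm one stop while halving
# the exposure time leaves the EV essentially unchanged.
print(exposure_value(8, 1 / 125))     # ~12.97
print(exposure_value(5.6, 1 / 250))   # ~12.94 (f/5.6 is a rounded stop)
```

This is why a light meter can offer several aperture/shutter pairs for the same scene: they all land on the same EV.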


Histograms.

You will have seen those statistical bar charts where a value is represented as a rectangle. You may remember them from plotting temperatures (as a line) and rainfall (with the bars I mention). Well, in photography we use this kind of chart to represent, as a sort of very craggy mountain, how often each light value repeats in an image. All digital cameras, of higher or lower quality, have one, and we must know how to read it. Since we are in digital photography and video, the vertical axis counts pixels and the horizontal axis covers the range of luminosity (not colour). It is usually shown in black and white (although I have seen colour histograms on good cameras, and they help a lot) and tells us whether we have more pixels in one band than in others; at the end of the day it is a statistical representation. It has a lot to do with the exposure discussed in the previous paragraphs: the dark tones sit on the left and the light ones on the right, with the neutral or mid tones in between. With this we judge the exposure. It helps with contrast and with avoiding unbalanced "peaks" in a specific range that throw off the image.
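Reading a histogram is easier once you have built one. A minimal sketch over a flat list of 8-bit luminance values, using a few coarse bins instead of the camera's 256 (the function name and sample data are mine):

```python
def luminance_histogram(pixels, bins=8):
    """Bucket 0-255 luminance values into bins, dark tones first
    (left of the histogram) and light tones last (right)."""
    counts = [0] * bins
    width = 256 / bins
    for y in pixels:
        counts[min(bins - 1, int(y / width))] += 1
    return counts

# A mostly dark image: the pixels pile up in the leftmost bins,
# a hint that the shot may be underexposed.
dark_image = [10, 20, 15, 40, 35, 200, 230, 60]
print(luminance_histogram(dark_image))   # [3, 3, 0, 0, 0, 0, 1, 1]
```

A "mountain" leaning against the left edge suggests crushed shadows, one against the right edge suggests blown highlights; a centred spread usually means a balanced exposure.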


Fragmentation and staging, organization of the space of the shot.

When we talked about the creation of a script, you will remember that there was a "normality" that we had to present to the viewer from the beginning. The staging corresponds to that "fantasy made real". We create or recreate a world in which the narrative events of the script will happen naturally, even in the wildest of fictions. For this, after reading the technical script, the space where the actors will do their work in front of the camera is recreated, drawing in some cases on the conventions of the genre being shot. If the development of the script has been done correctly, the people in charge of the cameras will know where to stand and everything will be orchestrated effectively. For this it will be necessary to fragment the space according to the actions, organizing the space for each shot, regardless of the number of cameras recording the performance. The scenery, the sets, the costumes, the make-up, the props and the lighting are very important elements at this point of the production, so all the members of these teams, including the technicians, must be coordinated. Then the sound (dialogue, background and ambient sounds, and the musical soundtrack) will be added to create the unit that is the scene.

Arrangement of sequences and shots.

We already know what shots are and what a sequence is (even what a sequence shot is, but that is not what this section is about). Now we have to see how to order them in a cinematographic production. Before editing, the shots must be classified so they can later be reordered in the final cut. It is like a puzzle, or rather like the pieces of a construction set, since the pieces are interchangeable: the shots can be combined in different ways, sometimes even better than originally planned. Let us see now how these shots can be catalogued for later ordering.

Identification of images and editing of metadata labels.

The photographs will be stored in a data library that we must tag and catalogue so we can find what we have recorded without problems. For this we use the file names of the photographs (photos_of_January 001, for example) and the built-in metadata, where we can find location, author, date, etc. Cameras have a series of options for editing these labels, but if we want to be sure, there are computer programs that modify them to our liking. Finally, we must be careful with the metadata associated automatically with an image or video, because it can contain "sensitive" personal information about us, and we may be giving it away without realizing if we are not sure what we want to catalogue and what we want to keep hidden or unrecorded.
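The renaming side of cataloguing can be automated from a few metadata fields. This is a sketch of one possible scheme, not any particular program's convention; the function name and naming pattern are mine:

```python
from datetime import datetime

def catalog_name(event, capture_time, index, ext="jpg"):
    """Build a predictable, sortable file name from a few metadata
    fields: capture date and time, an event label and a shot index."""
    stamp = capture_time.strftime("%Y%m%d_%H%M%S")
    return f"{stamp}_{event}_{index:03d}.{ext}"

shot_time = datetime(2024, 1, 15, 17, 42, 5)
print(catalog_name("january_shoot", shot_time, 1))
# 20240115_174205_january_shoot_001.jpg
```

Leading with the timestamp makes an alphabetical listing chronological, which is exactly what a catalogue needs; sensitive fields such as GPS location are simply left out of the name.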

Technical characteristics of digital video recording systems.

By video recording system we understand not only the camera or cameras that record, but the whole set of devices that contribute to the recording chain; in short, it usually refers to recording studios. When we think of a television studio, we think of several cameras sending their image to a single console where the director mixes by selecting among the different points of view offered by the different positions. This multicamera recording requires several monitors (or one divided into a grid of screens) for the simultaneous display of the different cameras. If we want to complicate the system further, we can add cameras in other locations, as when a live broadcast has its headquarters in a network's central studio and several journalists cover a news story on location, even in different countries. The different signals must reach the same central unit and be mixed as if everything came from the same place. Everything is recorded on hard disk (both the individual feeds and the mixed output), so the result is a single product to be broadcast, but a multitude of data has gone into its creation.

Suitable record carriers for various image acquisition technologies.

We have already spoken in part about these carriers. As you will remember, we focused on high-quality video (AVI) and Internet-quality video (MP4). The record carriers are the media we can use to store those files. Today the choice is clear: hard drives and SD cards. The write speed of these devices matters, and the higher the better; but that is hardly a problem any more, since the highest capacities and speeds are increasingly affordable.
