This can be a highly technical and complex subject, full of interactions with a variety of optical and mechanical measurements and functions. What we’re presenting here is a brief overview of just two aspects, aperture and depth of field, of a machine vision system (or indeed any other kind of photography).
Aperture
Well, obviously, that’s a hole. That’s what the word means.
Specifically, in this case, it’s the hole through which light travels to get to the image sensor in the camera. Relating to our own biology, the aperture is like the pupil of an eye letting light in to hit the rods and cones of the retina (our image sensor).
Also, like our own eye, where we have an iris controlling the amount of light which gets through, a camera has an aperture stop (often shortened to, and confused with, the aperture) controlling the size of the aperture, and thus how much light gets to the image sensor. One of the main functions of this is to prevent saturation of the sensor. External to the camera, of course, this can also be controlled by using vision system lighting techniques.
Another aspect of the control offered by the aperture stop is determining how straight and directed (technically: collimated) the light reaching the image sensor is. In this way it also affects the Depth of Field (which we will discuss in a moment).
You may have seen the “aperture” of a camera lens being specified as an “f” number similar to: f/8.
This is actually a ratio of the focal length to the effective aperture diameter. The more observant of you will have noticed that this is written like a fraction: “f” over (or divided by, if you like) 8 (or another number). Well done. What it means is that the larger the number, the smaller the overall fraction, and thus the smaller the aperture.
The “f-stops” on a lens will usually be denoted in factors of √2 (approx. 1.41), which relates to a factor of 2 in light intensity (change the diameter by √2, change the area letting the light in by 2, yes? Good).
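As a quick numerical sketch of those two relationships (the lens focal length and f-numbers here are made up purely for illustration):

```python
import math

def aperture_diameter(focal_length_mm, f_number):
    """Effective aperture diameter: the f-number is focal length / diameter."""
    return focal_length_mm / f_number

def relative_intensity(f_number):
    """Light gathered is proportional to aperture area, i.e. 1 / f_number^2."""
    return 1.0 / f_number ** 2

# A hypothetical 50 mm lens at f/8 has a 6.25 mm effective aperture.
print(aperture_diameter(50, 8))  # 6.25

# The standard stop scale is successive powers of sqrt(2)
# (lens markings round these to 1.4, 2.8, 5.6, and so on):
stops = [round(math.sqrt(2) ** k, 1) for k in range(7)]
print(stops)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0]

# Each full stop changes the light reaching the sensor by a factor of 2:
print(round(relative_intensity(8) / relative_intensity(8 * math.sqrt(2)), 6))  # 2.0
```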
Moving on, then, to…
Depth of Field
Unlike “aperture”, the meaning of the term “depth of field” (DoF) is not so obvious.
The DoF starts at the nearest “sharp” object and extends to the farthest sharp object the camera can see.
Now, obviously there isn’t a sharp cut-off line where something is suddenly out of focus; it is a gradual change. That change can be altered by using a different DoF: a large DoF brings the entire image into focus, while a small DoF emphasises a particular object in a scene and de-emphasises everything in front of and behind the object (the foreground and background).
The DoF is a function of various factors, most notably: the magnification at the image sensor, and the lens aperture.
When the magnification is increased, the DoF decreases (foreground and background tend to become less “sharp”); decreasing magnification will increase the DoF.
The aperture works the same way: increased aperture (decreasing f number) gives a decreased DoF; decreased aperture, increased DoF (because the light reaching the image sensor is more collimated).
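The aperture effect can be sketched numerically using the textbook thin-lens approximation (via the hyperfocal distance). The lens, focus distance and circle-of-confusion figures below are assumptions for illustration only:

```python
def dof_limits(focal_m, f_number, focus_dist_m, coc_m):
    """Near/far limits of acceptable sharpness (thin-lens approximation).

    coc_m is the maximum acceptable circle of confusion on the sensor.
    """
    f = focal_m
    hyperfocal = f * f / (f_number * coc_m) + f
    near = focus_dist_m * (hyperfocal - f) / (hyperfocal + focus_dist_m - 2 * f)
    if focus_dist_m >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = focus_dist_m * (hyperfocal - f) / (hyperfocal - focus_dist_m)
    return near, far

# Hypothetical 50 mm lens focused at 2 m, 5 micron circle of confusion:
for n in (2, 8):
    near, far = dof_limits(0.050, n, 2.0, 5e-6)
    print(f"f/{n}: DoF ~ {far - near:.3f} m")
# f/2: DoF ~ 0.031 m   (wide aperture, shallow DoF)
# f/8: DoF ~ 0.125 m   (smaller aperture, deeper DoF)
```

Stopping down two full stops here roughly quadruples the depth of field, which matches the rule of thumb in the text: smaller aperture, larger DoF.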
That last element is intimately involved in machine vision, for example, where the correct lighting (be it bar light or LED backlight, or whatever) can highlight the features you want to image, which helps the camera image the objects you are actually interested in.
Where we’ve used the term “sharp” in these circumstances, it is shorthand for “acceptably sharp”. Any lens can only truly focus at one distance at a time; however, as we’ve discussed, the sharpness of an image can be altered, at least to the point where it is visually indistinguishable from the “true” point of focus.
Acceptably sharp does have a definition. If the blur spot that the lens casts of a point on the image sensor, through the aperture (which, as you can guess, actually has the shape of the aperture stop, roughly circular), is smaller than a pixel, it is acceptably sharp. There are a whole bunch of formulae and far more rigorous descriptions which underpin that idea; however, more important is the effect.
Where the blur spot grows larger than a pixel, the object starts to become visually blurry. That blur spot (which, again, takes on roughly the shape of the aperture stop) is called the “circle of confusion”, and the point where it exceeds the acceptable size is generally where the usefulness of an image stops, especially when talking about machine vision identification and/or recognition.
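That pixel-sized criterion can be sketched with the standard thin-lens defocus approximation. The lens, focus distance and pixel pitch below are hypothetical numbers chosen for illustration:

```python
def blur_diameter(focal_m, f_number, focus_dist_m, object_dist_m):
    """Approximate diameter of the blur spot (circle of confusion) on the
    sensor for a point at object_dist_m, with the lens focused at
    focus_dist_m (thin-lens approximation)."""
    f = focal_m
    return (f * f * abs(object_dist_m - focus_dist_m)
            / (f_number * object_dist_m * (focus_dist_m - f)))

def acceptably_sharp(blur_m, pixel_pitch_m):
    """Our working definition: the blur spot fits within one pixel."""
    return blur_m <= pixel_pitch_m

# Hypothetical 50 mm lens at f/8, focused at 2 m, 3.45 micron pixels:
b = blur_diameter(0.050, 8, 2.0, 2.5)
print(f"{b * 1e6:.1f} um")           # ~32 um blur for a point 0.5 m behind focus
print(acceptably_sharp(b, 3.45e-6))  # False: well outside the depth of field
```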
Some of the other factors which can be involved in this subject are the format size, focal length and distance to subject, movement (of camera and objects) as well as what it is we’re trying to image.
We hope you’ve gained an idea of the terms we’ve outlined here, and also an appreciation of the complexity which can come into play when dealing with optics at this level.