fireball

If you are interested in learning photography the easy way, you can join us and learn step by step online, along with more tips on health and beauty care.

Sunday, May 27, 2007

2. HOW A DIGITAL CAMERA WORKS

Digital cameras are very much like the still more familiar 35mm film cameras. Both contain a lens, an aperture, and a shutter. The lens brings light from the scene into focus inside the camera so it can expose an image. The aperture is a hole that can be made smaller or larger to control the amount of light entering the camera. The shutter is a device that can be opened or closed to control the length of time the light enters.


The Nikon Coolpix 4300 looks a lot like a traditional film camera.

The big difference between traditional film cameras and digital cameras is how they capture the image. Instead of film, digital cameras use a solid-state device called an image sensor, usually a charge-coupled device (CCD). On the surface of each of these fingernail-sized silicon chips is a grid containing hundreds of thousands or millions of photosensitive diodes called photosites, photoelements, or pixels. Each photosite captures a single pixel in the photograph-to-be.


An image sensor sits against a background enlargement of its square pixels, each capable of capturing one pixel in the final image. Courtesy of IBM.

The exposure
When you press the shutter release button of a digital camera, a metering cell measures the light coming through the lens and sets the aperture and shutter speed for the correct exposure. When the shutter opens briefly, each pixel on the image sensor records the brightness of the light that falls on it by accumulating an electrical charge. The more light that hits a pixel, the higher the charge it records. Pixels capturing light from highlights in the scene will have high charges. Those capturing light from shadows will have low charges.

When the shutter closes to end the exposure, the charge from each pixel is measured and converted into a digital number. The series of numbers can then be used to reconstruct the image by setting the color and brightness of matching pixels on the screen or printed page.
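To make that conversion concrete, here is a minimal Python sketch, assuming an 8-bit camera; the function name and full-well figure are invented for illustration, not taken from any real camera's circuitry.

```python
# A minimal sketch of turning accumulated charge into a digital number.
# Real cameras use dedicated analog-to-digital converter chips; the
# 8-bit depth and full-well value here are illustrative assumptions.

def quantize_charge(charge, full_well=1.0, bit_depth=8):
    """Map a pixel's accumulated charge (0..full_well) to a digital number."""
    levels = 2 ** bit_depth - 1                        # 255 steps for 8 bits
    fraction = min(max(charge / full_well, 0.0), 1.0)  # clamp to the sensor's range
    return round(fraction * levels)

# A highlight, a mid-tone, and a shadow:
print([quantize_charge(c) for c in (0.95, 0.5, 0.02)])  # [242, 128, 5]
```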



When the shutter opens, light strikes the image sensor to form the image. Courtesy of Canon.

It's all black and white after all
It may be surprising, but pixels on an image sensor can only capture brightness, not color. They record only the gray scale: a series of 256 increasingly darker tones ranging from pure white to pure black. How the camera creates a color image from the brightness recorded by each pixel is an interesting story.
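In the common 8-bit convention (an assumption here; the text only says 256 tones), those tones are simply the whole numbers 0 through 255:

```python
# The 256-tone gray scale as 8-bit values: 0 is pure black, 255 is pure white.
gray_scale = list(range(256))
print(len(gray_scale), gray_scale[0], gray_scale[-1])  # 256 0 255
```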



The gray scale contains a range of 256 tones from pure white to pure black.


What is color?
When photography was first invented, it could only record black and white images. The search for color was a long and arduous process, and a lot of hand coloring went on in the interim (causing one photographer to comment "so you have to know how to paint after all!").


Smiling faces to greet you: mirroring contentment from within. (Title taken from label with hand-colored print.) An unidentified group of six people, two of whom are children (2 families?), standing in front of a possibly newly constructed sod house with a pitched sod roof, stovepipe, two windows and a door showing. With the people is a dog. One of the women is wearing a flat straw hat with a large ribbon. Likely taken in North Dakota.


"Fred Hultstrand copy of a photo printed from a glass plate. Glass plate borrowed from Howard O. Berg, Devils Lake, N.Dak. Brought in by Morris D. Johnson, Bismarck, N.Dak."--Back of hand-colored print. Photo likely taken by Job V. Harrison of Rock Lake, N.D. Courtesy of the Library of Congress.

One major breakthrough was James Clerk Maxwell's 1860 discovery that color photographs could be created using black and white film and red, blue, and green filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different color filter over the lens. The three black and white images were then projected onto a screen with three different projectors, each equipped with the same color filter used to take the image being projected. When brought into register, the three images formed a full color photograph. Over a century later, image sensors work much the same way.
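Maxwell's trick is easy to mimic in software. Here is a small Python sketch that stacks three black-and-white exposures into the red, green, and blue channels of one color image; the tiny 2x2 arrays are made-up data, not real measurements.

```python
import numpy as np

# Three black-and-white exposures, one per color filter (invented values).
red_exposure   = np.array([[0.9, 0.1], [0.5, 0.0]])
green_exposure = np.array([[0.9, 0.8], [0.2, 0.0]])
blue_exposure  = np.array([[0.9, 0.1], [0.2, 1.0]])

# Bringing the three into register is just stacking them as channels.
color_image = np.stack([red_exposure, green_exposure, blue_exposure], axis=-1)
print(color_image[0, 0])  # [0.9 0.9 0.9] -> a near-white pixel
```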

Colors in a photographic image are usually based on the three primary colors red, green, and blue (RGB). This is called the additive color system because when the three colors are combined or added in equal quantities, they form white. This RGB system is used whenever light is projected to form colors as it is on the display monitor (or in your eye).
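A few lines of Python show the additive arithmetic; the helper name is our own, and the familiar 0-255 scale per channel is assumed.

```python
# Additive RGB mixing: light sources add channel by channel.
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_lights(*colors):
    """Add light sources per channel, clipping at the 255 maximum."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

print(add_lights(red, green))        # (255, 255, 0) -> yellow
print(add_lights(red, green, blue))  # (255, 255, 255) -> white
```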





RGB uses additive colors. When all three are mixed in equal amounts, they form white. When red and green overlap, they form yellow, and so on.

From black and white to color
Since daylight is made up of red, green, and blue light, placing red, green, and blue filters over individual pixels on the image sensor can create color images just as they did for Maxwell in 1860. In the popular Bayer pattern used on many image sensors, there are twice as many green filters as there are red or blue filters. That's because the human eye is more sensitive to green than it is to the other two colors, so green's color accuracy is more important.
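The layout is easy to sketch. The code below assumes the common RGGB arrangement (the text only says there are twice as many green filters; exact layouts vary by sensor):

```python
# Generate the filter color over each photosite in an RGGB Bayer mosaic.
def bayer_pattern(rows, cols):
    pattern = [['R', 'G'],   # each 2x2 block: one red, two green, one blue
               ['G', 'B']]
    return [[pattern[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

for row in bayer_pattern(4, 4):
    print(' '.join(row))
# R G R G
# G B G B
# R G R G
# G B G B
```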


Colored filters cover each photosite on the image sensor so the photosites capture only the brightness of the light that passes through. The lenses on top of each pixel collect light and make the sensor more sensitive. Courtesy of Fuji.


With the filters in place, each pixel can record only the brightness of the light that matches its filter and passes through it; other colors are blocked. For example, a pixel with a red filter knows only the brightness of the red light that strikes it. To figure out what color each pixel really is, a process called interpolation uses the colors of neighboring pixels to calculate the two colors the pixel didn't record. By combining these two interpolated colors with the color the site measured directly, the full color of the pixel can be calculated. "I'm bright red, and the green and blue pixels around me are also bright, so that must mean I'm really a white pixel." It's like a painter creating a color by mixing varying amounts of other colors on his palette. This step is computationally intensive, since comparisons with as many as eight neighboring pixels are required to perform the process properly.

Here the full color of a green pixel is about to be interpolated from the eight pixels that surround it.
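As a rough illustration of that interpolation, here is a simplified Python sketch that just averages whichever 3x3 neighbors recorded each missing color. Real demosaicing algorithms are considerably more careful; the grid and readings below are invented.

```python
import numpy as np

filters = [list('RGRG'), list('GBGB'),
           list('RGRG'), list('GBGB')]  # an RGGB mosaic (assumed layout)
raw = np.random.rand(4, 4)              # made-up brightness readings

def interpolate_pixel(r, c):
    """Estimate the full (R, G, B) color at photosite (r, c)."""
    rgb = {}
    for color in 'RGB':
        samples = [raw[i, j]
                   for i in range(max(r - 1, 0), min(r + 2, 4))
                   for j in range(max(c - 1, 0), min(c + 2, 4))
                   if filters[i][j] == color]
        rgb[color] = sum(samples) / len(samples)  # average the neighbors that saw it
    return rgb['R'], rgb['G'], rgb['B']

print(interpolate_pixel(1, 1))  # full color at a blue-filtered photosite
```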

There's a computer in your camera
Each time you take a picture, millions of calculations have to be made in just a few seconds. It's these calculations that make it possible for the camera to preview, capture, compress, filter, store, transfer, and display the image. All of these calculations are performed by a microprocessor in the camera that's similar to the one in your desktop computer.

1. WHAT IS A DIGITAL PHOTOGRAPH?






Pixels: dots are all there are
Digital photographs are made up of hundreds of thousands or millions of tiny squares called picture elements, or just pixels. Like the impressionists who painted wonderful scenes with small dabs of paint, your computer and printer can use these tiny pixels to display or print photographs. To do so, the computer divides the screen or printed page into a grid of pixels. It then uses the values stored in the digital photograph to specify the brightness and color of each pixel in this grid, a form of painting by number. Controlling, or addressing, a grid of individual pixels in this way is called bit mapping, and digital images are called bitmaps.
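A toy Python sketch of this painting by number (the 5x5 grid and its values are invented):

```python
# A 5x5 bitmap: the stored numbers set the brightness of each grid square.
bitmap = [
    [0,   0,   255, 0,   0],
    [0,   255, 255, 255, 0],
    [255, 255, 255, 255, 255],
    [0,   255, 255, 255, 0],
    [0,   0,   255, 0,   0],
]

shades = {0: '.', 255: '#'}  # map each stored value to a display mark
for row in bitmap:
    print(''.join(shades[value] for value in row))
```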







Here you see a portrait of Amelia Earhart done entirely in jelly beans. Think of each jelly bean as a pixel and it's easy to see how dots can form images.




Image size:
The quality of a digital image, whether printed or displayed on a screen, depends in part on the number of pixels used to create the image (sometimes referred to as resolution). More pixels add detail and sharpen edges.
If you enlarge any digital image enough, the pixels will begin to show, an effect called pixelization. This is not unlike traditional silver-based prints, where grain begins to show when prints are enlarged past a certain point. The more pixels there are in an image, the more it can be enlarged before pixelization occurs.
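The effect is easy to reproduce in software: blow an image up by repeating each pixel and the individual squares appear. A small sketch, with made-up values:

```python
# Nearest-neighbor enlargement: every pixel becomes a factor-by-factor block,
# which is exactly the blockiness (pixelization) seen in over-enlargements.
def enlarge(image, factor):
    return [[value for value in row for _ in range(factor)]
            for row in image for _ in range(factor)]

tiny = [[10, 200],
        [200, 10]]
for row in enlarge(tiny, 3):  # each original pixel becomes a 3x3 block
    print(row)
```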


The photo of the face (right) looks normal, but when the eye is enlarged too much (left) the pixels begin to show. Each pixel is a small square made up of a single color.


The size of a photograph is specified in one of two ways: by its dimensions in pixels or by the total number of pixels it contains. For example, the same image can be said to have 1800 x 1600 pixels (where "x" is pronounced "by," as in "1800 by 1600"), or to contain 2.88 million pixels (1800 multiplied by 1600).
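The arithmetic in a few lines of Python:

```python
width, height = 1800, 1600
total_pixels = width * height  # dimensions multiplied give the pixel count
print(f"{width} x {height} = {total_pixels:,} pixels "
      f"({total_pixels / 1_000_000:.2f} million)")
# 1800 x 1600 = 2,880,000 pixels (2.88 million)
```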

This digital image of a Monarch butterfly chrysalis is 1800 pixels wide and 1600 pixels tall. It's said to be 1800x1600.

Photography science

Pictures posing questions: the next steps in photography could blur reality

When a celebrity appears in a fan-magazine photo, there's no telling whether the person ever wore the clothes depicted or visited that locale. The picture may have been "photoshopped," we say, using a word coined from the name of the popular image-editing software Adobe Photoshop.
But today's image processing is just a prelude. Imagine photographs in which the lighting in the room, the position of the camera, the point of focus, and even the expressions on people's faces were all chosen after the picture was taken. The moment that the picture beautifully captures never actually happened. Welcome to the world of computational photography, arguably the biggest step in photography since the move away from film.
Digital photography replaced the film in traditional cameras with a tiny wafer of silicon. While that switch swapped the darkroom for far more powerful image-enhancement software, the camera itself changed little. Its aperture, shutter, flash, and other components remained essentially the same.
Computational photography, however, transforms the act of capturing the image. Some researchers use curved mirrors to distort their camera's field of view. Others replace the camera lens with an array of thousands of microlenses or with a virtual lens that exists only in software. Some use what they call smart flashes to illuminate a scene with complex patterns of light, or set up domes containing hundreds of flashes to light a subject from many angles. The list goes on: three-dimensional apertures, multiple exposures, cameras stacked in arrays, and more.
In the hands of professional photographers and filmmakers, the creative potential of these technologies is tremendous. "I expect it to lead to new art forms," says Marc Levoy, a professor of computer science at Stanford University.
Medicine and science could also benefit from imaging techniques that transcend the limitations of conventional microscopes and telescopes. The military is interested as well. The Defense Advanced Research Projects Agency, for example, has funded research on camera arrays that can see through dense foliage.
For consumers, some of these new technologies could improve family snapshots. Imagine fixing the focus of a blurry shot after the fact, or creating group shots of your friends and family in which no one is blinking or making a silly face. Or posing your children in front of a sunset and seeing details of their faces instead of just silhouettes.
Since the late 1990s, inexpensive computing power and improvements in digital camera technology have fueled research in all these areas of computational photography. Levoy says that scientists "look around and see more and more everyday people using digital cameras, and they begin to think, 'Well, this is getting interesting.'"
ROBOTS TO SUPERHEROES
Computational photography has roots in robotics, astronomy, and animation technology. "It's almost a convergence of computer vision and computer graphics," says Shree Nayar, professor of computer science at Columbia University.
Attaching a video camera to a robot is easy, but it's difficult to get the robot to distinguish objects, faces, and walls and to compute its position in a room. "The recovery of 3-D information from [2-D] images is kind of the backbone of computer vision itself," Nayar says.
Other important optics and digital-imaging advances have come from astronomy. In that field, researchers have been pushing boundaries to view ever-fainter and more-distant objects in the sky. In one technique, for example, the telescope's primary mirror continuously adjusts its shape to compensate for the twinkling effect created by Earth's atmosphere (SN: 3/4/00, p. 156).
Rapid progress in computer animation during the 1980s and 1990s provided another cornerstone of the new photography. The stunning visual realism of modern animated movies such as Shrek and The Incredibles comes from accurately computing how light bounces around a 3-D scene and ultimately reaches a viewer's eye (SN: 1/26/02, p. 56). Those calculations can be run in reverse--starting from the light that entered the lens of a camera and tracing it back--to deduce something about the real scene.
Such calculations make it possible to decode the often-distorted images taken by these unconventional cameras. "What the computational camera does is it captures an optically coded image that's not ready for human consumption," Nayar explains. By unscrambling the raw images, scientists can extract extra information about a scene, such as the shapes of the photographed objects or the unique way in which those objects reflect and absorb light.
PHOTO FUSION
One powerful way to do computational photography is to take multiple shots of a scene and mathematically combine those images. For example, even the best digital cameras have difficulty capturing extreme brightness and darkness at the same time. Just look at an amateur snapshot of a person standing in front of a sunlit window.
Compared with a single photo, a sequence of shots taken with different exposures can capture a scene with a wide range of brightness, called the dynamic range. Both a bright outdoor scene and the person in front of it can have good color and detail when the set of images is merged. The method was described by Nayar and others at a conference in 1999.
In a similar way, a series of frames in which the focus varies can produce a single, sharp image of the entire scene. Both these types of mergers can be arduously performed with standard image-editing software, but computational photography automates the process.
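To give a flavor of the idea, here is a heavily simplified Python sketch of merging two exposures: each pixel leans on whichever shot recorded it away from pure black or pure white. The weighting scheme is our own illustrative assumption, not the published 1999 method the text mentions.

```python
import numpy as np

def merge_exposures(shots):
    """shots: same-sized arrays with brightness values in 0..1."""
    stack = np.stack(shots)                    # (n_shots, height, width)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0  # favor mid-tones, shun clipping
    weights += 1e-6                            # guard against dividing by zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

dark  = np.array([[0.02, 0.40]])  # underexposed: shadow crushed, window fine
light = np.array([[0.35, 0.99]])  # overexposed: shadow fine, window blown out
print(merge_exposures([dark, light]))  # usable detail at both pixels
```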
A related technique fuses a series of family portraits into a single image that's free of blinking eyes and unflattering expressions. After using a conventional camera to take a set of pictures of a group of people, the photographer might feed the pictures into a program described during a 2004 conference on computer graphics by Michael Cohen and his colleagues at Microsoft Research in Redmond, Wash.
The user indicates the photos in which each face looks best, and the software then splices them into a seamless image that makes everyone attractive at the same time--even though the depicted moment never happened. This software is now being offered with a high-end version of Microsoft's Windows Vista.
Want that family photo in 3-D? Nayar's group takes three-dimensional pictures with a normal camera by placing a cone-shaped mirror, like a cheerleader's megaphone, in front of the lens. Because some of the light from an object comes directly into the lens and the rest of the light first bounces off spots inside the cone, the camera captures images from multiple vantage points. From those data, computer software constructs a full 3-D model, as Nayar's group explained at the SIGGRAPH meeting last year in Boston.

A mirrored cone on a video camera might be especially useful to capture an actor's performance in 3-D, Nayar says.
Another alteration of a camera's field of view makes it possible to shoot a picture first and focus it later. Todor Georgiev, a physicist working on novel camera designs at Adobe, the San Jose, Calif.-based company that produces Photoshop, has developed a lens that splits the scene that a camera captures into many separate images.
Georgiev's group etched a grid of square minilenses into a lens, making it look like an insect's compound eye. Each minilens creates a separate image of the scene, effectively shooting the scene from 20 slightly different vantage points. Software merges the mini-images into a single image that the photographer can focus and refocus at will. The photographer can even slightly change the apparent vantage point of the camera. The team described this work last year in Cyprus at the Eurographics Symposium on Rendering.
In essence, the technique replaces the camera's focusing lens with a virtual lens.
LIGHT MOTIFS
The refocusing trick made possible by Georgiev's insect-eye lens can also be achieved by placing a tiny array of thousands of microlenses inside the camera body, directly in front of the sensor that captures images.
Conceptually, the microlens array is a digital sensor in which each pixel has been replaced by a tiny camera. This enables the camera to record information about the incoming light that traditional cameras throw away. Each pixel in a normal digital camera receives light focused into a cone shape from the entire lens. Within that cone, the light varies in important ways, but normal cameras average the cone of light into a single color value for the pixel.
By replacing each pixel with a tiny lens, Levoy's research team developed a camera that can preserve this extra information. Mathematically, say the researchers, the change expands the normal 2-D image into a "light field" that has four dimensions. This light field contains all the information necessary to calculate a refocused image after the fact. Ren Ng, now at Refocus Imaging in Mountain View, Calif., explained the process at a 2005 conference.
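Conceptually, refocusing a light field amounts to shifting each sub-aperture view in proportion to its position in the lens and then averaging, a standard shift-and-add formulation. The Python sketch below follows that textbook recipe; it is not necessarily Ng's exact method, and all data are invented.

```python
import numpy as np

def refocus(light_field, alpha):
    """light_field: (U, V, H, W) sub-aperture views; alpha picks the focal plane."""
    U, V, H, W = light_field.shape
    views = []
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))  # shift grows with distance
            dv = int(round(alpha * (v - V // 2)))  # from the lens center
            views.append(np.roll(light_field[u, v], (du, dv), axis=(0, 1)))
    return np.mean(views, axis=0)                  # average the aligned views

lf = np.random.rand(3, 3, 8, 8)      # a made-up 3x3 grid of 8x8 views
print(refocus(lf, alpha=1.0).shape)  # (8, 8), focused at the chosen plane
```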
Capturing more information about incoming light waves can also create powerful new kinds of scientific and medical images. For example, Stephen Boppart and his colleagues at the University of Illinois at Urbana-Champaign create 3-D microscopic photos by processing the out-of-focus parts of an image.
The team devised software to examine how a tissue sample, for instance, bends and scatters light. In the February 2007 Nature Physics, the researchers describe how the device uses that information to discern the structure of the tissue. "What we've done is take this blurred information, descramble it, and reconstruct it into an in-focus image," Boppart says.
In computational photography, the flash becomes more than a simple pulse of light. For example, a room-size dome built by Paul Debevec of the University of Southern California in Los Angeles and his colleagues makes it possible to redo the lighting of a scene after it's been shot. Hundreds of flash units mounted on the dome fire one at a time in a precise sequence that repeats dozens of times per second. A high-speed camera captures a frame for every flash.
The result is complete information about how the subject reflects light from virtually every angle. Software can then compute exactly how the scene would look in almost any lighting environment, the researchers reported at the 2006 Eurographics Symposium on Rendering. This method is particularly promising for making films.
WHAT IS REALITY?
With all this manipulative power come questions of authenticity. The more that photographs can be computed or synthesized instead of simply snapped, the less confident a viewer is that a picture can be trusted.
"Certainly, all of us have a certain emotional attachment to things that are real, and we don't want to lose that," Nayar says. For example, to get a perfect family portrait, one might prefer that nobody had blinked. But is a bad shot better than a synthesized moment?
Whether film or digital, photographic images have always departed from reality to some degree. "And every generation, I believe, will redefine how much you can depart," Nayar says. "What was completely unacceptable 20 years ago has become more acceptable today."
Perhaps 20 years from now, when a photographer changes a picture's vantage point, people will still consider the scene to be real. But using a computer to change the clothes that a person in the image is wearing might be going too far, Nayar proposes.
Often, the goal of computational photography isn't to depart from reality but to create a closer facsimile of it. For example, someone looking at people standing in front of a sunset can see the faces clearly and can focus on any part of the scene. A normal photograph, with its dark silhouettes and fixed focus, offers a viewer less than reality.
So, a manipulated image can be "closer, by some subjective argument, to what the real world is for a person looking at it," Levoy says.
It's difficult to say which of the many technologies under the umbrella of computational photography will ever reach the consumer market. The room-size dome containing hundreds of flash units will almost certainly remain in the realm of specialized photographers and movie studios. Other techniques may be suitable for everyday use, but whether and when they reach the market will depend on the vagaries of business and marketing.
In whatever form computational photography becomes commonplace, people who adopt it over conventional image making will take pictures that capture more of what they actually see, and sometimes what never was at all.

Digital memory card

MEMORY CARD MANAGEMENT

The memory card in our digital camera was full, and it was an emergency: we were in the middle of snapping spring flowers. So we stopped at a camera store to buy a new memory card and, of course, we paid top dollar.
If you plan ahead, you can buy 1-gigabyte flash memory cards for less than $10 apiece; you just have to remember to bring them. A half-dozen would cost less than the small Photo Safe battery-powered hard drive we were given to try out from Digital Foci (DigitalFoci.com). The drive is pocket size and sells for $149 for the 40-gigabyte model. That would be $400 worth of memory cards, if you wanted that much memory.
On the face of it, it would seem like a much better deal to just buy a few extra memory cards and drop them in your pocket. But after a moment's thought, we realized the tiny drive had some big advantages.
With a pocket full of flash cards, it's hard to remember which ones are full and which are not. It's also almost impossible to recall what's on each of the full ones. Then when you back up to your computer, you have to insert and empty each of the cards, one at a time.
None of this is terribly onerous, but it's so much simpler to just unload a card into the pocket hard drive when it fills up. There are seven slots in the side of the Photo Safe drive to accept almost any of the cards used by today's digital cameras. An adapter can be purchased to add three more. Come home or back to the office and just plug the hard drive into your computer to unload all the stored pictures in one sweep. Most computers will have a program that organizes them as they come in; if not, there are several such programs you can buy.
All in all, we felt it was worth the price difference to be able to empty just one or two cards and keep shooting. This would be for people who take a lot of shots: professionals or shutterbugs. If you take only a few shots whenever you use the camera, then it's not worth the extra cost for the Photo Safe hard drive.
A TRAVELER'S AID
Kensington.com has a nice external numerical keypad for laptops that also functions as a stand-alone calculator. That's handy if you do a lot of key punching. It's sold in a package with a wireless mouse for $60 at Kensington's Web site.
An external mouse is really nice to have for a laptop. We use one. If you like the idea of the keypad alone, we found it for around $30 on the Web.
DROPPED CALLS AND ALL THAT STUFF
Dropped calls and poor cell phone reception have become the bane of the land, and naturally enough, we have a fix. (On the other hand, maybe you really didn't want to talk to that person in the first place.)
We've found, through costly trial and error, that expensive cell phones drop fewer calls and get better overall reception. Not too surprising, we guess. The quality and reputation of your carrier count too, and there's a Web site -- CellReception.com -- that offers personal opinions about that, depending on your location.
And then there's the antenna. You can kick up almost any cell phone's reception by adding an external antenna. Some of these plug into the cell phone itself; others attach to the roof of a car or have suction cups that stick to a wall or window. We did a few Web searches and found over a hundred offerings. There's no way we could test them all, nor would we want to.
We looked at a two-unit combination to increase reception in a defined area, like inside a home or large office. This was the Spotwave Zen Z1900 signal booster. The unwieldy name is matched by an unwieldy price: $399 for improved call coverage estimated at up to 2,500 square feet, depending on the construction of the building. It doesn't boost the signal from all cell phone carriers, however. In our ZIP code, it doesn't work with Verizon or Sprint/Nextel, for example, two of the largest cell service providers. We checked many ZIP codes and couldn't use Verizon or Sprint/Nextel with any of them. What it basically works with is Cingular and T-Mobile.
The exceptions are reason enough to consider carefully which signal booster would suit you best. You can go up to several thousand dollars for signal boosters for manufacturing plants, office buildings and large areas like casinos and sports arenas where there's lots of metal to interfere with reception. For a business, it's worth it. Individuals are better off with a high-quality phone.
INTERNUTS
Wize.com has a search routine that hunts for user comments for thousands of products. It searches more than 6,500 Web sites and ranks more than 30,000 products by user satisfaction and media buzz.
This is pretty much the way we all shop: We ask someone we know what they use and like, or we look up published opinions. But we've mentioned the problems with this before. Some songs of praise may be coming from people connected with the manufacturer or people who know a friend who works there. Some critical opinions may come from people at competing companies or someone bearing a grudge. It's always a judgment call, and common sense should be applied.