I have a D and I shoot raw 14-bit files. Depending on content they are between 18 and 21 MB. Converted to 16-bit TIFFs they come out at 96 MB. If you are looking at a particular camera, download the manual from the manufacturer's web site; it will list typical file sizes. It really can vary depending on who makes the camera. I assume you are shopping for memory cards or storage devices.
As everyone said, it varies a bit according to light, subject and ISO.

The data is losslessly compressed, hence the variance. If no compression were used for a 20 MP camera raw file with 14-bit analog-to-digital conversion (ADC) of each pixel location's voltage, the file size would be 40 MB. The reason is that two bytes (16 bits) would be needed in memory for each sample in order to respect byte boundaries. There is a lot of useless and redundant data in those two bytes in a typical image.
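To make that arithmetic concrete, here is a tiny Python sketch of the back-of-envelope calculation; the 20 MP and 14-bit figures are just the example numbers used above, not any particular camera:

```python
# Back-of-envelope: each photosite's 14-bit ADC reading is padded out to a
# full 16-bit (2-byte) word in memory to respect byte boundaries.
megapixels = 20          # example resolution from the post above
bytes_per_sample = 2     # 14 bits rounded up to 16 bits = 2 bytes

uncompressed_mb = megapixels * 1_000_000 * bytes_per_sample / 1_000_000
print(f"Uncompressed raw would be about {uncompressed_mb:.0f} MB")  # ~40 MB
```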
First, two of the 16 bits are not used at all. Next, many of the voltages (in whatever units are used) are small while others are large. If there is only one unit of voltage, the number can be written as just 1 instead of a full 14-bit binary string padded with leading zeros; in other words, the leading zeros are not needed. Similarly, three units is just 11, and so on.
The largest possible number, 11111111111111 (16,383), has no leading zeros to delete, so there is not much to be compressed there. There are more opportunities for compression as well. For example, one can establish a "par" number, such as is used on a golf scorecard or in the current rankings after each round, and record each value relative to that baseline.
In the camera, the par value will increase if you crank up the ISO. All of these tricks are totally lossless; they save memory space and reduce transmission time.
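A minimal Python sketch of those two ideas, dropping leading zeros and recording readings relative to a "par" baseline, might look like the following. The readings and the par value are made up purely for illustration, and a real format would also need to record how many bits each value uses, which is what the Huffman coding discussed later in the thread handles:

```python
def bits_needed(value: int) -> int:
    """Bits required once leading zeros are dropped (minimum of 1 for zero)."""
    return max(value.bit_length(), 1)

# Hypothetical 14-bit ADC readings (0 .. 16383).
readings = [0, 1, 3, 17, 200, 16383]

fixed_bits = 14 * len(readings)                        # fixed-width storage
variable_bits = sum(bits_needed(v) for v in readings)  # leading zeros dropped
print(fixed_bits, variable_bits)                       # 84 vs 1+1+2+5+8+14 = 31

# The "par" idea: record each reading as a (usually small) offset from a
# baseline, the way a golf scorecard records strokes relative to par.
par = 200
offsets = [v - par for v in readings]
print(offsets)
```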
Once the data is in the computer and ready for editing, the files are uncompressed. Want to test my reasoning? I just tried it, though it was admittedly not a very scientific test. I took a normally detailed picture, then left the settings the same and took a picture of a dark object with relatively little color variation. With my 18 MP D, the second raw file came out only slightly smaller than the first; I had to expose the second picture more because the light was too dim.
Otherwise there might have been an even smaller raw file, with more useless leading zeroes out of the ADC being eliminated by the compression.
With no compression, the D raw file size would be about 36 MB. There are many opportunities for compression, such as encoding changes in value rather than absolute levels, or using codes that are only as long as necessary to express the values.

John, although the principles of compression you outline above are correct, many of them do not apply directly to lossless compression of raw files, as raw data is a unique type of data.
For instance, applying a straight WinZip type of compression (as per the above link) to most raw image files does very little to reduce file size, because it ignores the fact that the raw data is actually interleaved values from individual colour channels (two green, one red and one blue) and that the magnitude of the random noise component in raw data varies in its own particular way. Raw compression does encode the relative change between readings, but to be effective it has to be applied per channel: it encodes the change in value relative to the immediately adjacent same-channel photosite to the left, or directly above in the case of a new row (for the sensor in landscape orientation).
Thus, for an image of a single fixed colour, the whole encoded image would theoretically contain nothing but a series of zeros, apart from the first four readings at the start of the first rows and columns, one for each of the four channels. This then works in conjunction with Huffman encoding, which assigns short binary codes to the most frequently occurring values and longer codes to less frequent values.
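As a rough illustration of that per-channel delta step (not the actual encoding used in any particular camera's raw format), here is a Python/NumPy sketch for a simple RGGB mosaic; it assumes the nearest same-channel neighbour is two columns to the left, or two rows up at the start of a row:

```python
import numpy as np

def per_channel_deltas(bayer: np.ndarray) -> np.ndarray:
    """Replace each photosite with its difference from the nearest
    same-channel neighbour: two columns to the left, or two rows up at the
    start of a row.  The four seed values in the top-left 2x2 block are
    kept as-is so the mosaic can be reconstructed exactly."""
    src = bayer.astype(np.int32)
    deltas = src.copy()
    deltas[:, 2:] -= src[:, :-2]    # same colour channel, two columns left
    deltas[2:, :2] -= src[:-2, :2]  # row starts: same channel, two rows up
    return deltas

# A perfectly uniform (constant-colour) RGGB mosaic delta-encodes to zeros
# everywhere except the four seed photosites.
flat = np.tile(np.array([[100, 200], [210, 50]], dtype=np.uint16), (4, 4))
print(np.count_nonzero(per_channel_deltas(flat)))   # 4 non-zero seed values
```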
Thus, for a Huffman scheme that, say, assigned a two-bit code to represent zero and longer codes to represent the less frequently occurring larger delta values (with some codes possibly being longer than 14 bits), the constant-colour raw image described above could be compressed down to only about 4 MB or so.
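The arithmetic behind that figure is simple; as a back-of-envelope sketch in Python, assuming the thread's 18 MP sensor and 2 bits per photosite for the near-universal zero delta:

```python
# If virtually every delta is zero and zero gets a 2-bit Huffman code, the
# whole frame costs roughly 2 bits per photosite.
megapixels = 18               # the thread's 18 MP camera, as an example
bits_per_photosite = 2        # short code for the dominant value (zero)
ideal_mb = megapixels * 1_000_000 * bits_per_photosite / 8 / 1_000_000
print(f"Idealised constant-colour raw: about {ideal_mb:.1f} MB")
# ~4.5 MB here; roughly 4-5 MB for sensors in this class.
```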
Unfortunately, the realities of raw sensor output interfere with this ideal encoding, even for black frames (inside-of-lens-cap images): sensors have noise, so the delta values are almost never zero, and that noise is larger the brighter the output, which is the effect of the statistical variation in the arrival of photons.
This means that even at the lowest ISO sensitivity on your Canon D, the average delta value for a constant black level is at least about four levels, and for a constant very bright level the average delta is about 80 levels, expressed in terms of a 14-bit range.
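Here is a small simulation of that effect using NumPy; the photon counts and the one-level-per-photon gain are invented purely for illustration, not measured values for any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_delta(mean_photons: float, adu_per_photon: float = 1.0,
                   n: int = 100_000) -> float:
    """Average |delta| between neighbouring same-channel photosites for a
    perfectly uniform subject, with only Poisson (shot) noise present."""
    a = rng.poisson(mean_photons, n) * adu_per_photon
    b = rng.poisson(mean_photons, n) * adu_per_photon
    return float(np.mean(np.abs(a - b)))

# The typical delta grows roughly with the square root of the signal:
print(mean_abs_delta(25))      # dark patch: a handful of ADC levels
print(mean_abs_delta(5000))    # bright patch: on the order of 80 levels
```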
Thus a zero delta is rarely encoded. In addition, different images have different histograms, meaning that the frequencies of occurrence of the various delta values vary from image to image. Also, calculating new Huffman frequency tables for each image takes time and computing resources, because it requires two passes through the data to be encoded.
Thus, camera manufacturers calculate these tables once, for an average image, and those are the values applied during compression. Typically this results in an average of about eight-bit codes being used for 14-bit raw depths, even for very dark images, which explains the file sizes you saw and why they changed by such a small percentage in spite of there being much less colour variation in the second case.
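Putting the numbers together as a final back-of-envelope sketch (again using the thread's rough figures, about 8-bit average codes on an 18 MP sensor, rather than measurements):

```python
# Back-of-envelope for a real image: roughly 8 bits per photosite on
# average once the camera's fixed Huffman tables are applied to the deltas.
megapixels = 18
avg_bits_per_photosite = 8     # rough average quoted in the post above
compressed_mb = megapixels * 1_000_000 * avg_bits_per_photosite / 8 / 1_000_000
print(f"Typical compressed raw: about {compressed_mb:.0f} MB")
# ~18 MB, about half of the 36 MB uncompressed figure mentioned earlier.
```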
Though I don't find raw to be as all-awesome as everyone else seems to think. The white balance is a stack easier, but that's about it. I still way prefer Levels to the ACR curve.
I didn't know that.
If you need to save memory you may need to compress the file at some point. That's like making a music file in some audio software: the file is going to be huge, but with much detail, and once you compress it you will most likely lose some of that vibrant detail.
Changing it to another format is like stacking algorithms together; since it's not compressing the file, you're just making more code. I always use MPEG Streamclip; it's an application that handles many formats and is meant for converting things directly, so you don't have to go through all of that trouble.
That's true, but I don't think it's the case this time, because we are talking only about the image size.
The point is that while the option will essentially double the file size of a flat image, it's not something you can dispense with simply because you don't need to interoperate with older versions of Ps; there are other, more important reasons to keep the option checked.