He would still get the "awesome shots" comments even on the pics he took with a basic mobile camera and shared on social media.
A photographer visits a friend for dinner and shows her some of his photos. "Oh wow! These are spectacular," she gushes, "you must have an amazing camera." The photographer replies, "That dinner was great; you must have excellent pots."
This simply is no longer as true as it once was, and hasn't been for at least a decade. More than three decades ago my forward-thinking COBOL programmer friend declared, drunk, that in the future someone called a wedding photographer would be able to walk into an event with a digital camera, wave it around, then select images for publication without having to seat people, set up tripods, or wait for that one-off unique special moment. They were right.
Those are the side-by-side examples to show the raw file latitude.
That's why these Halide iPhone camera reviews are perennially worthless. They've never released a review that feels critically substantial. It's simply "here's the new iPhone hardware we're being forced to support" dressed up as ad copy for their app.
Most of what professional cameras and lenses get you is the ability to get shots you otherwise couldn't - lower light, farther distance, faster action, etc. - plus particular effects like shallow depth of field and telephoto flattening. Nothing that makes a good photo by itself.
Without an eye, it's all wasted. But a good eye will see shots that can't be made; more of them if the camera at hand is limited.
Then camera technology really does make a massive difference, especially in low light, etc.
If the reason is fancy post-processing, then why can't Nikon have a tiny lens like the iPhone 13 and just add fancy post-processing to it?
Textures can also throw them off - "amplification" of the texture effect, almost.
They also suffer a bit zoomed in.
The post-processing fixes a lot of problems of older phone cameras but it has its limits.
On good camera hardware there's very little that all that post-processing would add outside of extreme high-ISO noise, IMO. Would it be nice? Sure. But you can find software and stack exposures manually for those situations too.
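Manual stacking really does help with noise; here is a minimal sketch of why, using a toy model with made-up numbers (a flat scene with independent Gaussian sensor noise per frame):

```python
import random
import statistics

# Toy model of manual exposure stacking: N frames of the same flat scene,
# each with independent Gaussian sensor noise (all numbers are made up).
random.seed(0)
N = 16          # frames stacked
PIXELS = 4000   # sample of pixel sites
SIGMA = 10.0    # per-frame noise standard deviation

frames = [[128.0 + random.gauss(0, SIGMA) for _ in range(PIXELS)]
          for _ in range(N)]

single_noise = statistics.pstdev(frames[0])
stacked = [sum(f[i] for f in frames) / N for i in range(PIXELS)]
stacked_noise = statistics.pstdev(stacked)

# Averaging N frames cuts random noise by about sqrt(N): ~4x here for N=16.
print(single_noise / stacked_noise)
```

This is exactly the trick the phone pipelines automate, just done by hand.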
And a lot of the other smart stuff gets fooled too easily.
The DSLR images also retain much more detail in cropping.
Nikon just expects you to handle that post processing part that your iPhone is doing for you. In exchange you get way more control over the final image.
Both devices are aimed at different people. I myself have an iPhone 13 Pro and a Nikon Z6ii. I tend to take snapshots with my iPhone because getting out the Nikon and playing around with sliders in Capture One is just too much hassle for a snapshot. Now would I take the iPhone to do a landscape photo where I hiked 6 hours to the location at 3 a.m.? Probably not. ;)
You'd be surprised how few megapixels are just fine for putting up on a wall or billboard. It's all about how far away the viewer is.
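The viewer-distance point can be made concrete with a rough sketch, assuming 20/20 vision resolves about one arcminute; the print sizes and distances are made-up examples:

```python
import math

def required_megapixels(width_in, height_in, viewing_distance_in):
    """Megapixels needed so a 20/20 eye (~1 arcminute of resolving power)
    can't pick out individual pixels at the given viewing distance."""
    ppi = 1 / (viewing_distance_in * math.tan(math.radians(1 / 60)))
    return (width_in * ppi) * (height_in * ppi) / 1e6

# A 36x24 inch wall print viewed from 3 feet: roughly 8 MP is enough.
print(round(required_megapixels(36, 24, 36), 1))
# A 48x14 foot billboard viewed from 50 feet: roughly 3 MP is enough.
print(round(required_megapixels(48 * 12, 14 * 12, 50 * 12), 1))
```

The billboard needs fewer pixels than the wall print, purely because nobody stands three feet from a billboard.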
The mirrorless photos will look much better on a laptop or bigger screen but about the same on a phone.
Capturing non-RAW, in my experience (iPhone 6S, now a 13 mini), the JPEGs are heavily de-noised and really don't look that good 1:1 on a large monitor; on the iPhone screen they look very good, since they're downsampled.
The article mentions the 'watercolor' effect since the iPhone 8, but I definitely had the issue with all the jpegs taken on my iPhone 6S since 2015...
DNGs however DO look very good, so clearly the sensor is capable of pretty nice images.
Try taking a portrait photo in iffy lighting, like at a concert, wedding, sporting event, etc. Something that really needs a fast and sharp lens.
iPhones apply filters to make the photos look more vivid and "ready to share". If a professional camera did that, it would not be professional.
This does annoy many people who then switch to Snow/BeautyCam to actually take their pictures since they want to look prettier.
See also https://www.dslrbodies.com/newsviews/news-archives/nikon-201... and https://www.dslrbodies.com/newsviews/news-archives/nikon-201...
Phone makers put the processing power into post-processing photos, for people who never do their own post-processing.
Both approaches are equally valid; they simply aim at different markets. And as far as software vs. physics goes, no amount of software and processing power can overcome the laws of optics.
The point of such a camera is not the in-camera processing, which most pro users would not use on principle. Instead it's gaining a lot of control over setting up the shot properly, with control over all the parameters that matter, to intentionally achieve the look you want. And then you finish the job in post-processing. There's a reason these things have so many buttons and dials: you need to use them to get the most out of the camera. The point of owning such a camera is having that level of control. The flip side is that it makes you responsible for the intelligence. That kind of is the whole point. If that's not what you wanted, you bought the wrong camera.
The iphone has a very limited set of controls. You actually have very little control over it. Nice if that's what you want and the AI is awesome. But it's also a bit limiting if you want more. Of course it's very nice when that's the camera you have and you want to take a shot quickly by just pointing and clicking. Nothing wrong with that. I have a Pixel 6 and a Fuji X-T30. I use them both but not the same way.
Probably one of the biggest mistakes people make: not understanding at what aperture their lens is sharpest - typically stopping the lens down too much.
Other mistakes: not using a fast enough shutter speed to eliminate blurring from mirror slap (or not using the anti-mirror-slap features in their camera), not using a sufficiently sturdy tripod and mount, and so on.
Also, sometimes lenses just don't leave the factory in very good shape. If you struggle to get sharp results, it may be beneficial to send it in for service. The stuff sent around to reviewers has been obsessed over by the manufacturer, perfectly tweaked on a lens testing bench to get it as close to perfect as they can.
What people often mean by such statements is that they like the phone's default processing compared to none in-camera, and that there's enough detail thanks to tons of light and landscapes generally being the easiest scenes to shoot.
As for why they aren't comparable (also a strange question from a seemingly experienced shooter): compare the software development departments and budgets at Apple vs. Nikon, a tiny player we all love (I've had a D750 since it came out and carried it everywhere, up to 6,000m). Cameras use very specialized processors that are very good at one thing only (basic operations on raw sensor data and JPEG conversion), and the various ML and stacking transformations simply aren't available there at the required performance. The whole design of the camera and its processing hardware isn't built around snapping 30 pics and combining them in under a second, or pre-capturing frames before you even hit the shutter.
Apple ate their lunches and then some. While I'm an old-school photographer who thinks a great SLR camera is the photographic equivalent of driving a Porsche, I don't miss carrying pounds of gear around. (OTOH I HATE the UX of iPhones for photography.) I digress. The camera biz is a classic biz school study in humans being human.
Now try to do sports photography with a mobile phone.
The first problem is the lack of zoom compared to a 200mm lens. The second is getting the 1/8000 exposure you need to stop the action on a rainy football field for a good photo.
Likely a Z6 from the price point you mentioned. It has 24MP (vs. the iPhone 13's 12MP) and much, much higher dynamic range (more than 11 stops vs. about 8 on an iPhone). So unless you don't know what you're doing, the Nikon is a way better camera (as it should be; it weighs 5x as much with a lens).
Autocorrect RAW in Adobe Photoshop looks good. And certainly on a 4k monitor the DSLR images reveal more detail.
My post here has me realizing I need to take iPhone and DSLR shots side-by-side in the same place with the same lighting and begin to compare them in-camera and in post-processing.
That's a challenge, I think. Take an R5 (only because I'm most familiar with it).
47 megapixels at 12 bpp is 211.5MB per frame, with a maximum shutter speed of 1/8000. In other words, you need to be able to pull data off the sensor (and to be clear, there's parallelization available, I just don't know how much) at a global rate of 1.6TB/s.
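The arithmetic behind those figures can be reproduced directly. Note that the 211.5MB figure implies three 12-bit samples per pixel, which is an assumption on my part; a Bayer sensor actually records one sample per photosite:

```python
# Reproducing the comment's arithmetic for full-sensor readout
# within a single 1/8000 s exposure.
pixels = 47e6
bits_per_sample = 12
samples_per_pixel = 3   # implied by the 211.5MB figure (assumption)
shutter_s = 1 / 8000

frame_mb = pixels * bits_per_sample * samples_per_pixel / 8 / 1e6
rate_tb_s = frame_mb / 1e6 / shutter_s  # whole frame in one exposure

print(frame_mb)    # 211.5
print(rate_tb_s)   # ~1.69 decimal TB/s (~1.5 binary TiB/s)
```

The 1.6TB/s figure falls between the decimal and binary readings of the same number.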
(Which requires a lot of patience, but an old one would be cheaper than a new camera. Though renting is always an option.)
iPhone photos are excellent as long as viewers are only seeing them on iPhones.
"HDR stacking" apps are actually "SDR tone mapping" apps; they /start/ with an HDR result and make it not HDR anymore!
If a user's needs are being met by an iPhone, then they shouldn't worry about DSLRs.
If a user's needs aren't being met by the DSLR, you've gotta wonder: is it the technology or the skill of the user?
But the idea of enhancing filters or social media features is completely alien to them.
Edit: Apologies for commenting on downvotes, but I'd be genuinely curious to see some objective evidence that the optics of a typical DSLR lens have a superior design. Of course it is true that larger lenses for larger sensors tend to be superior because they do not need to resolve as many lines per mm and they do not need to be machined as precisely (all else being equal). But does anyone know of any actual lab tests that make relevant comparisons? I am a bit tired of people just assuming that DSLR lenses are higher quality than smartphone lenses, even though the cost of modern smartphones, and the enormous disparity in the number of units sold, makes it far from obvious that this should be the case.
This, 100%.
The massive difference in image quality is when shooting in RAW. That’s when you actually get the 48MP & the images are fantastic.
But that’s not the default. The default is 12MP.
That’s why reviewers are so torn on this camera system. If the default was a 48MP picture/quality, everyone would be praising it. But when the default is 12MP, it’s par for the course.
If you use photos as a way to preserve memories, then who knows what these photos are going to be displayed on in the future?
Maybe in the (near) future, we start adopting VR headsets, and then the 12MP vs 48MP difference is going to matter a lot.
On the low-tech end of the spectrum, maybe you want a larger poster printed out. If you need to crop some part of the image at all, the difference in resolution is going to be very noticeable.
The 48MP camera has the same color spatial resolution as a 12MP camera, but it has 48MP of monochromatic spatial resolution. Humans aren't as sensitive to color resolution as they are to spatial resolution in general. This is why the "2x" mode on the 14 Pro looks great compared to what you might expect based on your comment. The 2x crop only has "3MP" of color resolution, but the 12MP of spatial resolution from the 2x crop makes it perfectly usable.
For your specific use case, the ultrawide camera may work fine as a "macro" lens, depending on the size of the pixels you're trying to capture. A real macro lens on an interchangeable lens camera would obviously do better.
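The 48MP-to-12MP default presumably comes from 2x2 pixel binning of the quad-Bayer sensor; here is a toy sketch of that binning step (illustrative values, not Apple's actual pipeline, which also does demosaicing and stacking):

```python
def bin_2x2(img):
    """Average each 2x2 block of samples, turning an HxW grid into H/2 x W/2:
    the pixel-binning idea behind reading a 48MP quad-Bayer sensor as 12MP."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

full = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(bin_2x2(full))  # [[3.5, 5.5], [11.5, 13.5]]
```

Each binned pixel averages four physical photosites, trading resolution for lower noise per output pixel.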
This wasn't meant to be a nice dataset, it was just something I quickly tried to document my observation to a single other human. I would have been more careful if I expected to share it more widely.
Even if the lens was operating at the limit of physics, you'd get a 2 micron first ring of the airy disk at 500nm, which is bigger than the size of the pixels. It can help a bit to have pixels smaller than the smallest detail the lens can resolve, but at the same time the lens isn't operating at the physical limit, so there is probably only a small increase in spatial resolution.
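For reference, the Airy-disk figure works out as follows; the f/1.6 aperture is my assumption (typical of recent iPhone main cameras), since the comment doesn't state one:

```python
# First dark ring of the Airy pattern: diameter = 2.44 * wavelength * f-number.
# f/1.6 is an assumed aperture; the comment gives only the wavelength.
wavelength_um = 0.5   # 500 nm green light
f_number = 1.6

airy_diameter_um = 2.44 * wavelength_um * f_number
print(airy_diameter_um)  # ~1.95 um, on the order of the sensor's pixel pitch
```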
The main advantage to me is that the result is much less affected by Apple's always-on edge-sharpening processing. The effective "resolution" of the processing artifacts is higher.
In previous iPhones if you take a photo of a bunch of leaves on a tree it's almost like it tries to draw a little sharpened outline around each one, which looks like a watercolor mess if you zoom in at all and doesn't capture what your eye sees.
With the 48mp compressed shots I find landscapes and trees look much more natural and in general you can crop and zoom into photos further before the detail is lost in the processing mess.
Unlikely. You mean MB, not KB.
The 48MP sensor still has 48MP of monochromatic resolution, but it only has 12MP of effective color resolution. You'll still see fine details, but the colors are not as high resolution as the details. This is rarely a problem, given the way the human eye processes color.
I don't think the artifacts are directly from aliasing but rather an artifact of software interpolation.
If anyone knows better, please correct me.
[1] Other camera apps are available, etc.
But it does look like it was shot through a window without a polarizer filter.
I do agree the other pics are great though (and I own an iPhone, because the company paid for any phone and I wanted the best camera...). I'm not 100% happy with the "iPhone look", too much fine contrast for my taste, but they're of course great cameras.
Fortunately, most of the venues I go to for shows generally have a "No detachable lens cameras" rule, which means my Fuji X100 is allowed. Unfortunately, security at the venues often ignores the policy and I don't want to be That Guy holding up the line arguing with them about it.
(I'm also not one of those people that is taking pictures [and I never record video] the whole show, I just want a handful of high quality shots to help me remember the show)
https://i.imgur.com/gm8jUkj.jpg https://i.imgur.com/hrBL18U.jpg https://i.imgur.com/zLPFgPW.jpg https://i.imgur.com/1kPzHZ3.jpg https://i.imgur.com/97R9asy.jpg https://i.imgur.com/3ITpUuT.jpg https://i.imgur.com/1MClydK.jpg
(No editing by me, though I did pick preferentially some photos that weren’t blurred majorly. The largest blurred object appears to be the DJ (BT) who was moving a decent bit.)
1) Historically, some bands have been concerned about their image and felt that professional-looking photos that painted them in a bad light, whatever that meant in reality, would be more damaging than amateur photos. I don't hear this as much today, but 15 years ago it was frequently given.
2) Concerts with a lot of standing room near the stage already get quite crowded. Someone showing up with a bulky DSLR (or even prosumer-grade mirrorless) body and a 200mm lens is going to take up quite a bit of room. Prior to the advent of half the damn crowd keeping their phones in the air recording the show for the entirety of it, I would also have said it obscures vision and annoys people, but now it's really not any worse than that.
3) They don't want someone to try and hold them liable if something goes wrong and some expensive camera body or glass gets broken.
On the times I've been able to bring my full camera gear in without a press pass, I stick to as small of a lens as I can and avoid being near the front of the crowd. Thankfully, even quite a ways back from the front of the crowd, a 50mm prime lens will still take some fantastic photos on a real camera vs. what you get with a smartphone. I understand why the rules are in place, though, and I don't really have a problem with them in general.
Or maybe they just annoy other patrons.
It’s great you found that work around with Fuji x100. I just think this is a valid though minor complaint since most cameras suffer in the same context, and I wish people would stop taking photos at concerts hahaha.
Google's computational photography is years ahead. The latest Pixel has better sensors too.
Maybe I am very biased by almost a decade of full frame shooting basically everything, but I like photo representing what I actually see with my own eyes at that moment.
When talking about Pixels: when I saw some non-ideal-light samples from the latest one, it was pretty clear that neither Apple nor e.g. Samsung (which I own and love, S22 Ultra) is in the same league in many aspects of photography. But the Pixel 6 had some pretty annoying issues per user reports. On the other hand, it cost significantly less from day 1 (and the 7 still does).
Having said that, I would love to see a large sensor camera with gcam esque chops. It’s not too hard to run into the limitations of the small sensor.
I think Google's computational photography might be better in some edge cases, but it comes at the cost of sometimes over-processing things: some textures look painted in my Pixel photos, and some night shots feel "cool" but don't capture what I'm seeing with my eyes at all, no matter my settings. And with portraits I sometimes have the feeling that I've captured a nice image, only to see it somehow ruined when the processing ends, with a face that's become too "beautified" (smooth skin, etc.).
I feel the iPhone's computation is overall better for the most common lighting situations, and maybe less aggressive, but I would need to test it for much more time to form a complete opinion.
Other comparisons: https://www.phonearena.com/news/Pixel-7-Pro-vs-iPhone-14-Pro...
https://www.digitaltrends.com/mobile/iphone-14-pro-vs-google...
Later, when sharing the photos, I realized we could distinctly see my partner and me in the reflection of the cat's eye. The CSI "enhance" memes are real :-)
I feel there is more to this story.
Okay...but how does it perform taking pictures of my toy dog in suburb-town USA?
They just have lots of fancy processing that makes quick snaps look better than the same amount of effort on a camera.
Too bad we’ll never know.
I haven’t had enough wifi to backup to iCloud and free up space to work in raw/48MP, but you can see the sorts of shots here with the stock app and 12MP.
http://instagram.com/isaacforman/
I am carrying a GoPro, an Osmo Pocket, an Insta360 Go 2, two drones and the two phones - the 14 is the one I used as a priority because the quality and options were so good.
But optics can’t beat physics. Better cameras need more space for lenses and sensors. They could make the whole phone thicker, but haven’t done that yet.
The trajectory seems unsustainable. We’ll see what happens I guess.
For users less interested in photography, the regular iPhones have a more subtle camera array.
The iPhone cameras are superb I think, but the "Apple Image Processing" renders stock camera photos useless for me. The watercolor smear and color smoothness that the article talks about. I like noise, don't smooth away the reality of life.
ProRAW is confusing because it is not RAW at all, instead it's an uncompressed image that has had a milder "Apple Image Processing" applied.
I've tested the iPhone 14 Pro, and the ProRAW files are too processed for me. They are not "reality". This is a philosophical concept. Do I want to capture what I see, or a smoothed watercolor ideal of what Apple think I should see?
I want a 48MP genuine RAW file that I can post-process in Lightroom or Photoshop. Here's looking forward to the iPhone 15 Pro!
It is really a shame that most new consumer tech is locked behind the doors of large corporations these days, that will keep the tech from reaching its true potential in a myriad of products.
Optics is weird. Either a device becomes a thing and then it's immediately a huge product, or the availability is straight up zero.
Pretty disappointing that a US$1000+ flagship device still can't batch capture at high resolution. Even entry-level DSLRs/mirrorless cameras can do 15+ fps in RAW mode.
I recently gave up my mirrorless in favor of my iPhone because the latter was just so much more convenient and largely good enough. I wonder if it is physically possible, however, for these smaller lens and sensor packages to ever get to the point of eliminating that phone camera smudge?
You can definitely take nice photos with it given the right variables and/or some creativity (as you could with a point-and-shoot a decade ago)... but don't be taken in by their marketing if photography is a goal for you.
o < aperture of a given diameter
/\
/ \
---- < small sensor (less area, more light per unit area)
/ \
/ \
-------- < large sensor (more area, less light per unit area)
If you compare typical shooting apertures for DSLRs and camera phones, they're not radically different. Say you are shooting a 50mm lens at f8 on a DSLR. That's an aperture of 6.25mm. A typical smartphone camera will have an aperture of around 3-4mm. In this scenario, then, the DSLR is getting about 3 times more light (or ~1.5 stops). Of course you can use much wider apertures on DSLRs, but their use is more limited given the shallow depth of field that results. If you're shooting e.g. landscapes, then you're probably not going to use apertures much wider than f8 anyway.
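The "about 3 times more light" claim checks out numerically; here is a quick sketch using the comment's own figures (3.5mm is my choice of midpoint for the quoted 3-4mm phone range):

```python
import math

# Entrance pupils: a 50mm lens at f/8 gives 50/8 = 6.25mm; a typical
# phone camera is assumed at 3.5mm (midpoint of the quoted 3-4mm range).
dslr_pupil_mm = 50 / 8
phone_pupil_mm = 3.5

area_ratio = (dslr_pupil_mm / phone_pupil_mm) ** 2  # light scales with area
stops = math.log2(area_ratio)
print(round(area_ratio, 2))  # ~3.19x more light gathered
print(round(stops, 2))       # ~1.67 stops
```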
However, sensors get noise from different sources: and while you're right to point out that you might be up against photon shot noise, read noise goes down with pixel area: so, as long as pixel area scales with sensor area, and that scaling is performed by uniformly scaling the pixel, the larger sensor is intrinsically "a little bit better". Quoting shamelessly again from wikipedia [2]
> The read noise is the total of all the electronic noises in the conversion chain for the pixels in the sensor array. To compare it with photon noise, it must be referred back to its equivalent in photoelectrons, which requires the division of the noise measured in volts by the conversion gain of the pixel. This is given, for an active pixel sensor, by the voltage at the input (gate) of the read transistor divided by the charge which generates that voltage, CG = V_{rt}/Q_{rt}. This is the inverse of the capacitance of the read transistor gate (and the attached floating diffusion) since capacitance C = Q/V. Thus CG = 1/C_{rt}.
As capacitance is proportional to area, pixel area matters here: read noise scales with it linearly. In low-light conditions, read noise dominates most cellphone sensors (mostly for the above reasons).
[1] https://en.wikipedia.org/wiki/Etendue [2] https://en.wikipedia.org/wiki/Image_sensor_format#Read_noise
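A minimal per-pixel SNR sketch of the shot-plus-read-noise model discussed above; the read-noise values (1.5 e- vs 3 e-) are illustrative assumptions, not measured figures for any real sensor:

```python
import math

def snr(photoelectrons, read_noise_e):
    """Per-pixel SNR with photon shot noise (variance = signal) and read
    noise added in quadrature; ignores dark current and fixed-pattern noise."""
    return photoelectrons / math.sqrt(photoelectrons + read_noise_e ** 2)

# Assumed values: a big low-read-noise pixel (~1.5 e-) vs a small
# phone pixel (~3 e-), across three light levels.
for signal in (10, 100, 10000):
    print(signal, round(snr(signal, 1.5), 1), round(snr(signal, 3.0), 1))
```

At 10,000 photoelectrons the two pixels are effectively indistinguishable; at 10 photoelectrons the read-noise difference is clearly visible, which is exactly the "read noise dominates in low light" point.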
Are smaller sensors also faster to read, given the lower capacitance? I wonder if that might give them an advantage when it comes to stacking and averaging images to reduce noise.
Maybe the iPhone would do better in studio conditions, I don’t know.
But it’s a very tough problem.
The problem with the photo in the article is that the structure of the background is quite apparent, and all the details in the background have been multiplied into hexagons, which is very distracting.
Directly comparable macro photograph of a moth. https://www.slrphotographyguide.com/images/butterflymacro.jp...
For example: A nice side benefit of a larger sensor is a bit more depth of field. At a 13mm full-frame equivalent, you really can’t expect too much subject separation, but this shot shows some nice blurry figures in the background.
For as long as I can remember, depth of field referred to the range of distance that is in focus. So more depth of field would mean more things in focus and less subject separation. And "distance to subject" and "field of view" equal, a larger sensor results in less depth of field.
But in the article it is clearly the opposite. This isn't the only place I've noticed the change in meaning either.
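The conventional definition can be put in numbers with the usual thin-lens approximation; the focal lengths and circles of confusion below are illustrative assumptions chosen for equal field of view and f-number:

```python
def total_dof_mm(focal_mm, f_number, coc_mm, subject_mm):
    """Approximate total depth of field for subject distances well below
    the hyperfocal distance: DoF ~ 2 * N * c * s^2 / f^2."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# Same field of view, f-number, and subject distance (2 m). Assumed values:
# full frame (26mm lens, c = 0.030mm) vs a phone-sized sensor at roughly
# 4.6x crop (5.7mm lens, c = 0.0066mm).
ff_dof = total_dof_mm(26, 1.8, 0.030, 2000)
phone_dof = total_dof_mm(5.7, 1.8, 0.0066, 2000)
print(round(ff_dof), round(phone_dof))  # the smaller sensor shows several times more DoF
```

Under these assumptions the larger sensor has markedly less depth of field, i.e. more subject separation, matching the conventional usage.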
“ A nice side benefit of a larger sensor is a bit more depth of field effect”.
E.g. referring to the visual effect of shallow DoF, not the DoF itself. Because the following sentence is unambiguous about his intent, I'm inclined not to be overly pedantic here and let it slide.
Maybe?
edit: it's fixed now
- Was it editorialized by the submitter?
- Was it changed by dang to de-clickbait it? (probably not in this case)
- Did one of HN's title filters remove some words?
I was sick of the smudgy look that happened often on the iPhone when the lighting wasn't perfect, and also there is a unique "look" that the Fuji mirrorless cameras spit out due to their x-trans sensor[1]. In my short 2 weeks with the camera I've had a ton of fun and gotten some great shots.
While no doubt the 14 pro is amazing, your statement isn't true.
Welcome to the hobby! The X-T20 is a great camera.
> also there is a unique "look" that the Fuji mirrorless cameras spit out due to their x-trans sensor[1]
The performance of the Fuji sensors isn't the main reason behind the processing; for a sensor-type comparison, there's a great article here:
https://medium.com/@nevermindhim/x-trans-vs-bayer-fantastic-...
Fuji cameras have built-in post-processing that allows them to render images closer to film stock. Fuji has spent a lot of time refining their post-processing to match how images would look on their own film stocks, and they're really the only manufacturer that achieves such great out-of-camera processed images.
If you are working and shooting in RAW, you'll find that Fuji's output, when you process the RAWs yourself in tools like Darktable, is not much to look at; it's fairly similar to any other mirrorless or DSLR. You'd really have to be pixel peeping most of the time to see a noticeable difference (assuming the lens etc. is very similar). When working in RAW, you can often get similar processed colours and images out of most cameras, so this shouldn't be a limiting factor. From my Android phone, I can save RAWs and generally achieve similar colours and overall appearance, although, due to the sensor size, it will be notably lower quality if you start pixel peeping, and there's usually less colour depth to work with on mobile sensors.
Edit: Had the wrong link for the article.
I'd say aesthetically, it's very difficult to get decent background separation on a phone. The practical reason for using a camera body (I kind of consider phones to be real cameras) is to deliver quality images and aesthetics that are not achievable trivially on the phone. There's a point where trying to use a phone the same way as a camera body actually becomes really awkward from a user interface perspective, and very difficult based on capabilities.
It's actually a bit infuriating how little camera bodies have modernized. You won't find many camera bodies with built-in GPS, there are no security options (anti-theft, preventing photo access, automatic cloud backup), and I haven't found a single camera manufacturer's phone app (tethering with the camera) that actually works decently with RAW workflows. There is demand for post-processing in cameras too - not for every photographer, but Fuji users in particular tend to use that camera because of its post-processing capabilities.
The lack of GPS is actually a pain for professionals who work in teams, even when you aren't using it for positioning. If you're covering an event without GPS, your camera bodies usually won't have decently synchronised clocks, and if you're collecting photos from multiple photographers to build a second-by-second story, the metadata will quickly turn into a jumbled mess. Meanwhile, most phones keep time so accurately that imported images are typically perfectly ordered.
I wish I could just have a simple dumb phone with a calendar, GPS, and texting, and it doesn't cost 1400 bucks.