It’s not though, read my comments again. Slowly if you’re struggling.
You lack perception because your level of understanding is childish. I’d put you at 7/8. It’s really quite illuminating how thick some people can be.
The College of Optometrists came out and said it was ambiguous. That's the point of the image, dumbass. It's not about good spatial awareness. All you're demonstrating is a lack of basic perception.
lol ok Neo.
Unfortunately you’re unable to see the (objective) pixel representation :(
No, it does. That's the point lol. Go read the Wikipedia article, thicko.
Brown with a gold tint yes.
And blue so light it could be mistaken for white.
That isn't what's happening; it's a low-res, overexposed photo that lacks visual cues, not real life.
Well, you would select the brightest bit to get an idea of the part least impacted by the shadow.
But yes, still closer to white and gold than (dark) blue and black.
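The "select the brightest bit" idea can be sketched in a few lines of plain Python. This is a minimal illustration, not anything from the thread: pixels are represented as RGB tuples, "brightness" is a standard perceived-luminance weighting (an assumption of mine), and the sample values are made up.

```python
# Minimal sketch: among sampled RGB pixels, pick the brightest one as
# the sample least affected by shadow. The luminance weights are the
# common Rec. 709 coefficients; the sample values are invented.
def luminance(pixel):
    """Approximate perceived brightness of an (R, G, B) tuple."""
    r, g, b = pixel
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def least_shadowed(pixels):
    """Return the pixel with the highest luminance."""
    return max(pixels, key=luminance)

samples = [(90, 100, 140), (150, 160, 200), (120, 130, 170)]
print(least_shadowed(samples))  # the palest sample: (150, 160, 200)
```

The brightest sample is the best stand-in for the fabric's unshadowed colour, which is the whole argument being made here.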
No it’s a very light blue that looks like white+shadow.
The gold is a browny gold but the options were ‘white and gold’ or ‘blue and black’
https://lemmy.dbzer0.com/pictrs/image/1af486db-deb1-44da-8d48-6ad5b5833713.webp
I see these exact pixels for the whole dress. So no black, and no blue like the original physical dress.
explaining how their perception is correct and how blue-black people don’t understand the image.
Well, the ones that do understand the image by definition won't need it explained. There's no 'correct': if we're talking about the pixels/digital representation, it's white and gold (or light blue and brown if we're being pedantic); if we're talking about what the physical dress is, it's blue and black.
If it were a white and gold dress and the light were reversed to shadow, it'd likely be the other way around: some people would interpret it as the pixels displayed (blue and black), and others would subconsciously revert it to white and gold.
No because you can’t see the floor underneath.
Looks like a fence with holes in it to the left; you can see the sun through the holes in the distance.
No, because it's your subconscious; otherwise you'd have no problem understanding why it was ambiguous. (The same applies to elevated beings: they can grasp differences in human colour perception.)
And either way, even if your assumptions were true, you still don't know the angle of the sun, potential coverings, etc. You can't predict the shade without that info, so the logical choice would be to use the colours the pixels display.
Because shade works in 3D, and it's not clear how far away the background is in this picture. But yes, 'dress-shaped and -sized patches of shade' exist: trees, a shaded balcony, etc.
The brain doesn’t just read raw brightness; it interprets that brightness in relation to what it thinks is going on in the scene.
So when someone sees the dress as white and gold, they’re usually assuming the scene is lit by cool, natural light — like sunlight or shade. That makes the brain treat the lighter areas as a white-ish or light blue material under shadow. The darker areas (what you see as black) become gold or brown, because the brain thinks it’s seeing lighter fabric catching less light.
You, on the other hand, are likely interpreting the lighting as warm and direct — maybe indoor, overexposed lighting. So your brain treats the pale pixels not as light-colored fabric, but as light reflecting off a darker blue surface. The same with the black: it's being "lightened" by the glare, which changes the pixel representation to gold, but you interpret it as black under strong light, not gold.
Looks like a sunny background and that the dress is in the shade
The claim mixes up how perception works and what people actually mean when they talk about top-down processing. White and gold viewers aren't saying the pixels are literally white and gold; they're saying the colors they perceive match most closely with that label, especially when those were the only options given. Many of them describe seeing pale blue and brown, which are the actual pixel values. That's not bottom-up processing in the strict sense, because even that perception is shaped by how the brain interprets the image based on assumed lighting. You don't just see wavelengths; you see surfaces under conditions your brain is constantly estimating. The dress image is ambiguous, so different people lock into different lighting models early in the process, and that influences what the colors look like.

The snake example doesn't hold up either. If the lighting changes and your perception doesn't adjust, that's when you're more likely to get the snake's color wrong. Contextual correction helps you survive; it doesn't kill you.

As for the brain scan data, higher activity in certain areas means more cognitive involvement, not necessarily error. There's no evidence those areas were just shutting things down. The image is unstable, people resolve it differently, and that difference shows up in brain activity.
What science lol.
The pixels are light blue and gold.
The dress itself is dark blue and black.
But the pixels side with the white and gold team: they are seeing the pixels as they appear. If you see blue and black, your subconscious is overriding the objective reality of the pixels (and correctly guessing what colours the original dress is).
I love how people keep saying this without actually picking the colours. There are no black pixels in there at all.
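Actually "picking the colours" can be sketched in plain Python without an image library: classify each sampled RGB value against the four colours under debate by nearest RGB distance. The reference values and the sample pixels below are rough assumptions of mine, not measurements from the photo; to check the real image you'd sample its pixels (e.g. with a colour picker or an image library) and feed them in.

```python
# Minimal sketch: classify sampled RGB pixels against the four colours
# under debate by nearest Euclidean RGB distance. Reference values are
# rough assumptions, not measured from the actual photo.
REFERENCE = {
    "white": (255, 255, 255),
    "gold": (200, 160, 60),
    "blue": (30, 50, 120),
    "black": (0, 0, 0),
}

def nearest_colour(pixel):
    """Return the reference colour name closest to `pixel`."""
    return min(
        REFERENCE,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(pixel, REFERENCE[name])),
    )

# Invented samples of the kind of values the photo's pale and dark
# regions roughly hold: a very light blue and a brownish gold.
samples = [(190, 200, 225), (120, 100, 60)]
print([nearest_colour(p) for p in samples])  # ['white', 'gold']
```

The point the comment is making drops out directly: a pale blue pixel lands nearest "white" and the brownish regions land nearest "gold", and nothing in that range comes out as "black".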
Yikes, we’re gonna need a bigger spectrum.
You lack nuance. I started winding you up because 'you're wrong and I see good' is a one-dimensional, smooth-brained perspective.