• 0 Posts
  • 36 Comments
Joined 6 months ago
Cake day: January 18th, 2025

  • explaining how their perception is correct and how blue-black people don’t understand the image.

    Well, the ones that do understand the image by definition won’t need it explained. There’s no single ‘correct’ answer: if we’re talking about the pixels/digital representation, it’s white and gold (or light blue and brown if we’re being pedantic); if we’re talking about what the physical dress is, it’s blue and black.

    If it were a white and gold dress and the light were reversed to shadow, it’d likely be the other way around: some people would interpret it as the pixels displayed (blue and black), and others would subconsciously revert it to white and gold.
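A quick way to make the ‘light blue and brown pixels’ point concrete is nearest-colour labelling: take a pixel’s RGB value and find the closest name in a small palette. This is only an illustrative sketch; the palette entries and the two sample pixels are assumed values for demonstration, not measurements from the actual image.

```python
# Toy sketch: classify a pixel against a small colour palette by nearest
# Euclidean distance in RGB space. Palette and sample pixels are
# illustrative assumptions, not values sampled from the real photo.
palette = {
    "white": (255, 255, 255),
    "light blue": (160, 170, 200),
    "blue": (30, 50, 120),
    "gold": (210, 170, 60),
    "brown": (120, 95, 60),
    "black": (20, 20, 20),
}

def nearest_label(pixel):
    """Return the palette name closest to `pixel` in RGB space."""
    return min(
        palette,
        key=lambda name: sum((p - q) ** 2 for p, q in zip(pixel, palette[name])),
    )

# Roughly "dress-like" sample pixels (assumed, for illustration only):
print(nearest_label((130, 135, 170)))  # pale region of the lighter fabric
print(nearest_label((115, 95, 65)))    # darker region of the trim
```

Even though people label the lighter fabric ‘white’, a pale desaturated blue is numerically closer to light blue than to pure white, which is the pedantic reading above.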



  • No, because it’s your subconscious; otherwise you’d have no problem understanding why it was ambiguous. (The same applies to elevated beings: they can grasp differences in human colour perception.)

    And either way, even if your assumptions were true, you still don’t know the angle of the sun, potential coverings, etc. You can’t predict the shade without that info, so the logical choice would be to use the colours the pixels display.



  • The brain doesn’t just read raw brightness; it interprets that brightness in relation to what it thinks is going on in the scene.

    So when someone sees the dress as white and gold, they’re usually assuming the scene is lit by cool, natural light — like sunlight or shade. That makes the brain treat the lighter areas as a white-ish or light blue material under shadow. The darker areas (what you see as black) become gold or brown, because the brain thinks it’s seeing lighter-coloured fabric catching less light.

    You, on the other hand, are likely interpreting the lighting as warm and direct — maybe indoor, overexposed lighting. So your brain treats the pale pixels not as light-coloured fabric, but as light reflecting off a darker blue surface. The same goes for the black: it’s being “lightened” by the glare, which shifts the pixel representation toward gold, but you interpret it as black under strong light, not gold.
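The ‘discount the assumed light’ idea above can be sketched numerically with a crude von Kries-style correction: divide each channel of the pixel by the assumed illuminant colour. The pixel and the two illuminant values below are assumptions chosen for illustration, not measurements from the image.

```python
# Toy sketch of illuminant discounting (von Kries-style): the "perceived"
# surface colour is the pixel divided channel-wise by the assumed light
# colour, rescaled so that neutral white light leaves the pixel unchanged.
# All numeric values here are illustrative assumptions.

def discount_illuminant(pixel, illuminant):
    """Divide out the assumed light colour, clamping to the 0-255 range."""
    return tuple(min(255, round(255 * p / i)) for p, i in zip(pixel, illuminant))

pixel = (130, 135, 170)       # a pale bluish pixel from the lighter fabric

cool_shade = (180, 200, 255)  # assumption: cool, bluish daylight shade
warm_glare = (255, 220, 160)  # assumption: warm, yellowish indoor glare

# Same pixel, two different assumed lights, two different "surface" colours:
print(discount_illuminant(pixel, cool_shade))  # near-neutral: reads whitish
print(discount_illuminant(pixel, warm_glare))  # blue channel boosted: reads blue
```

The same input pixel comes out near-grey under the assumed cool shade but strongly blue under the assumed warm glare, which is the disagreement the comment describes.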



  • The claim mixes up how perception works and what people actually mean when they talk about top-down processing. White and gold viewers aren’t saying the pixels are literally white and gold; they’re saying the colors they perceive match most closely with that label, especially when those were the only options given. Many of them describe seeing pale blue and brown, which are the actual pixel values. That’s not bottom-up processing in the strict sense, because even that perception is shaped by how the brain interprets the image based on assumed lighting. You don’t just see wavelengths; you see surfaces under conditions your brain is constantly estimating. The dress image is ambiguous, so different people lock into different lighting models early in the process, and that influences what the colors look like.

    The snake example doesn’t hold up either. If the lighting changes and your perception doesn’t adjust, that’s when you’re more likely to get the snake’s color wrong. Contextual correction helps you survive; it doesn’t kill you.

    As for the brain scan data, higher activity in certain areas means more cognitive involvement, not necessarily error. There’s no evidence those areas were just shutting things down. The image is unstable, people resolve it differently, and that difference shows up in brain activity.


  • What science lol.

    The pixels are light blue and gold.

    The dress itself is dark blue and black.

    But the pixels side with the white and gold team: they are seeing the pixels as they appear. If you see blue and black, your subconscious is overriding the objective reality of the pixels (and correctly guessing what colours the original dress is).