• CosmicTurtle@lemmy.world · 40 points · 1 year ago

    You’re missing a huge part of the reason why the term ‘tebibyte’ even exists.

    Back in the 90s, when USB sticks were just coming out, a megabyte was still 1024 kilobytes. Companies saw the market getting saturated with drives, but drives were still expensive and we hadn’t fully figured out how to miniaturize them.

    So some CEO got the bright idea of changing the definition of a “megabyte” to mean 1000 kilobytes instead of 1024. That way they could say that their drive had more megabytes than their competitors’. “It’s just 24 kilobytes. Who’s going to notice?”

    Nerds.

    They stormed various boards to complain, but because the average user didn’t care, sales went through the roof and soon the entire storage industry changed. Shortly after that, they started cutting costs to actually make smaller-sized drives while calling them by their original size, i.e. 64 MB* (where 64 MB now meant 64,000 kilobytes).

    The people who actually cared had to invent the term “mebibyte” purely because of some CEO wanting to make money. And today we have a standard that only serves to confuse the people who actually care that their 2 TB drive is really 1.82 TiB (about 1863 GiB), not the 2 TiB (2048 GiB) the label suggests.
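
    For anyone who wants to check that last bit, here’s a quick Python sketch (plain arithmetic, nothing vendor-specific):

    ```python
    tb = 2 * 1000**4        # a marketed "2 TB" drive: 2,000,000,000,000 bytes
    print(tb / 1024**3)     # ~1862.6 GiB, well short of the 2048 GiB in 2 TiB
    print(tb / 1024**4)     # ~1.82 TiB
    ```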

    • pHr34kY@lemmy.world · 19 points · edited · 1 year ago

      Dude, a “1.44 MB” floppy disk was 1.38 MiB once formatted (1,474,560 B raw). It’s been going on for eternity.

      It’s inconsistent across time, though. The 700 MB on a CD-R was really MiB, but a DVD’s 4.7 GB was not.

      RAM has always, without exception, been reported as 1024 B per KB. Conversely, network bandwidth has been 1000 B per KB for every application since the dialup days (and prior).
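
      To put numbers on that inconsistency, a quick Python sketch (the decimal/binary helper names are just made up for illustration):

      ```python
      def decimal(n, power):
          # SI convention: 1 kB = 1000 B, 1 MB = 1000**2 B, ...
          return int(n * 1000**power)

      def binary(n, power):
          # binary convention: 1 KiB = 1024 B, 1 MiB = 1024**2 B, ...
          return int(n * 1024**power)

      print(binary(700, 2))   # CD-R "700 MB": 734,003,200 B (really 700 MiB)
      print(decimal(4.7, 3))  # DVD "4.7 GB": 4,700,000,000 B (plain decimal)
      ```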

      • davidgro@lemmy.world · 4 points · 1 year ago

        One thing to point out: the floppy thing isn’t due to formatting; the units themselves were screwed up. It’s not 1.44 million bytes or 1.44 MiB regardless of formatting. They’re 1440 KiB (which produces the raw size you gave), which is about 1.406 MiB unformatted.

        The reason is that they were doubled from 720 KiB disks*, and the largest standard 5¼ inch disks (“1.2 MB”) were doubled from 600 KiB*. I guess that seemed easier or less confusing to users than doubled 600k becoming “1.17M”.

        (* Those smaller sizes were themselves already doubled from earlier sizes. The “1.44 MB” ones are “double-sided, high density”.)
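
        For anyone who wants to verify, the math works out exactly as described (plain Python arithmetic):

        ```python
        KIB = 1024
        raw = 1440 * KIB                 # a "1.44 MB" floppy is really 1440 KiB
        print(raw)                       # 1474560, the raw byte count given above
        print(raw / 1024**2)             # 1.40625 MiB
        print(raw / 1000**2)             # 1.47456 decimal MB, so "1.44" fits neither unit
        print(1440 / 1000)               # 1.44: it's KiB counted in thousands, a mixed unit
        print(2 * 600 * KIB / 1024**2)   # doubled 600 KiB = 1.171875 MiB (the "1.2 MB" disk)
        ```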

    • wischi@programming.dev · 16 points · edited · 1 year ago

      That’s just wrong. “Kilo” is ancient Greek for “thousand”; it has always meant 1000. Because bytes are grouped in powers of two, and because of the pure coincidence that 10^3 (1000) is almost the same size as 2^10 (1024), people colloquially said “kilobyte” when they meant 1024 bytes, but that was always wrong.

      Update: To make it even clearer, think about what would have happened historically if most computers had used ternary instead of binary. Nobody would even think about reusing “kilo” for 3^6 (729) or 3^7 (2187), because neither is anywhere close to 1000.

      Reusing well-established prefixes like “kilo” was always a stupid idea.
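
      In concrete numbers (a trivial Python check):

      ```python
      print(2**10, 10**3)  # 1024 vs 1000: close enough that "kilo" got borrowed
      print(3**6, 3**7)    # 729 and 2187: neither is anywhere near 1000
      ```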

    • gornius@lemmy.world · 9 points · 1 year ago

      Or, you know, for consistency? In physics, kilo, mega, etc. are always 10^(3n), but then, for some bizarre reason, units of information use the same prefixes as 2^(10n).
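
      For what it’s worth, here’s a tiny Python loop (just arithmetic, no external data) showing how far apart the two systems drift; note that the gap grows with each step up:

      ```python
      # SI (decimal) prefixes vs IEC (binary) prefixes, side by side
      for n, names in enumerate(["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"], start=1):
          si, iec = 1000**n, 1024**n
          print(f"{names}: 10^{3*n} = {si} vs 2^{10*n} = {iec} ({iec/si - 1:+.1%})")
      ```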