There are 3 types of files. Renamed txt, renamed zip, and exe
I’d argue with this, but it seems like image and video file extensions have become a lawless zone with no rules so I don’t even think they count.
Looking at you, .webp
Video files are just a bunch of zip files in a trenchcoat.
Back in the day, when bandwidth was precious and porn sites would parcel a video into 10 second extracts, one per page, you could zip a bunch of these mpeg files together into an uncompressed zip, then rename it .mpeg and read it in VLC as a single video. Amazing stuff.
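You can reproduce that trick with Python's `zipfile`: `ZIP_STORED` means no compression, so the archive is essentially the clips concatenated back to back with small headers in between. A minimal sketch (the clip names and dummy bytes are invented for illustration):

```python
import zipfile

# Dummy stand-ins for the downloaded clips (hypothetical names).
clips = ["clip01.mpeg", "clip02.mpeg", "clip03.mpeg"]
for name in clips:
    with open(name, "wb") as f:
        f.write(b"\x00\x00\x01\xba fake MPEG payload ")  # placeholder bytes

# ZIP_STORED means "no compression": each member's bytes are copied
# verbatim into the archive, preceded only by a small local header.
with zipfile.ZipFile("movie.mpeg", "w", compression=zipfile.ZIP_STORED) as zf:
    for name in clips:
        zf.write(name)

# Every member really is stored uncompressed.
with zipfile.ZipFile("movie.mpeg") as zf:
    assert all(i.compress_type == zipfile.ZIP_STORED for i in zf.infolist())
```

Whether a player actually treats the result as one continuous video depends on it skipping the zip headers between clips, which forgiving MPEG demuxers apparently did.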
What’s it called when you logically expect something to work, but are totally surprised that it actually does?
Sounds an awful lot like a normal day at work as a dev.
Don’t forget renamed tar.
It’s a folder that you put files into, but acts as a file itself. Not at all like zip.
Tar.gz is pretty much like zip. Technically tar mimics a file system more closely but like who makes use of that?
Tar mimics a filesystem more closely? Tf???
TAR stands for Tape ARchive. It’s called that because it’s designed to be written to (and read from) non-seekable magnetic tape, meaning it’s written linearly. The metadata for each file (name, mtime, etc.) immediately precedes its contents. There is no global table of contents like you’d find on an actual filesystem. In fact, a tar.gz compresses the whole archive as a single gzip stream, with the individual files not even sitting on compression boundaries, meaning you can’t decompress any given file without decompressing all of the files before it. With a tape backup system, you don’t care, but with a filesystem you absolutely do.
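You can see the tape-style linear layout with Python's `tarfile` in streaming mode (`"r|"`), which forbids seeking entirely, so members can only be visited front to back. The member names here are invented for the sketch:

```python
import io
import tarfile

# Build a small in-memory tar with a couple of invented members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in [("a.txt", b"first"), ("b.txt", b"second")]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# "r|" is tape-style streaming mode: no seeking allowed, so members
# arrive strictly in the order they were written: header, then contents.
buf.seek(0)
names = []
with tarfile.open(fileobj=buf, mode="r|") as tar:
    for member in tar:
        names.append(member.name)

print(names)  # ['a.txt', 'b.txt']
```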
PKZIP mimics the traditional filesystem structure much more closely. The table of contents is at the end instead of the beginning, which is a bit strange as filesystems go, but it is a table of contents consisting of a list of filenames and offsets into the file where they can be found. Each file in a zip archive is compressed separately, meaning you can pull out any given file from a ZIP archive without any prior state, and you can even use different compression algorithms on a per-file basis (few programs make use of this). For obvious reasons, the ZIP format prioritizes storage space over modification speed (the table of contents is a single centralized list and files must be contiguous), meaning if you tried to use it as a filesystem it would utterly suck – but you can very readily find software that will let you read, edit, and delete files in-place as though it were a folder without rewriting the entire archive. That’s not really possible with a .tar file.
You could make the argument that tar is able to more closely mimic a POSIX filesystem since it captures the UNIX permission bits and ZIP doesn’t (ustar was designed for UNIX and pkzip was designed for DOS) but that’s not a great metric.
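The central-directory design is easy to demonstrate with Python's `zipfile`: you can pull out a single member directly, and even mix compression methods per file. Filenames here are invented:

```python
import zipfile

# Write two members using different compression methods per file.
with zipfile.ZipFile("demo.zip", "w") as zf:
    zf.writestr("stored.txt", b"kept verbatim", compress_type=zipfile.ZIP_STORED)
    zf.writestr("deflated.txt", b"squeezed " * 50, compress_type=zipfile.ZIP_DEFLATED)

# Pull out one member: zipfile reads the central directory at the end
# of the archive, seeks straight to that member, and decompresses only
# it -- no prior state needed, unlike tar.
with zipfile.ZipFile("demo.zip") as zf:
    data = zf.read("deflated.txt")

print(len(data))  # 450
```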
Ah, good ol’ Microsoft Office. I’ve taken advantage of their documents being renamed .zip files to send forbidden attachments to myself via email lol
On the flip side, there’s stuff like the Audacity app, that saves each audio project as an SQLite database 😳
Also .jar files. And good ol’ winamp skins. And CBZ comics. And EPUB books. And Mozilla extensions. And APK apps. And…
cbz is literally just a renamed zip
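And `zipfile` doesn’t care about the extension either: `is_zipfile` checks the archive structure, not the name, so a .cbz (or .docx, .epub, .apk…) opens like any other zip. A sketch with an invented filename:

```python
import zipfile

# A "comic" that is literally just a zip with a different extension.
with zipfile.ZipFile("issue-01.cbz", "w") as zf:
    zf.writestr("page-001.png", b"fake png bytes")
    zf.writestr("page-002.png", b"more fake bytes")

# is_zipfile inspects the file's structure, not its name.
print(zipfile.is_zipfile("issue-01.cbz"))  # True
with zipfile.ZipFile("issue-01.cbz") as zf:
    print(zf.namelist())  # ['page-001.png', 'page-002.png']
```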
an SQLite database
Genius! Why bother importing and exporting
Minetest (an open-source Minecraft-like game) uses SQLite to save worlds.
Mineclone2 is an absolute masterpiece of a game for Minetest IMO
I prefer games that embrace the difference from Minecraft instead of trying to emulate it. My favorite is MeseCraft.
Scrap Mechanic (a sandbox game that’s basically Space Engineers on the ground – or, more loosely, Minecraft but with physics where you can build cars) also uses SQLite to save worlds. It also uses uncompressed JSON files to store user creations.
It used to use project folders, but due to confusion/user error was changed in 3.0.
that saves each audio project as an SQLite database 😳
Is this a problem? I thought this would be a normal use case for SQLite.
doesn’t sqlite explicitly encourage this? I recall claims about storing blobs in a sqlite db having better performance than trying to do your own file operations
Thanks for the hint. I had to look that up. (The linked page is worth a read and has lots of details and caveats.)
The scope is narrow, and well documented. Be very wary of over generalizing.
The measurements in this article were made during the week of 2017-06-05 using a version of SQLite in between 3.19.2 and 3.20.0. You may expect future versions of SQLite to perform even better.
https://www.sqlite.org/fasterthanfs.html
SQLite reads and writes small blobs (for example, thumbnail images) 35% faster¹ than the same blobs can be read from or written to individual files on disk using fread() or fwrite().
Furthermore, a single SQLite database holding 10-kilobyte blobs uses about 20% less disk space than storing the blobs in individual files.
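The pattern that article benchmarks is just a two-column blob table in a single database file. A minimal sketch with Python's built-in `sqlite3` (the table and file names are my own, not from the article):

```python
import sqlite3

# One database file holding many small blobs, as in the article's test.
con = sqlite3.connect("thumbs.db")
con.execute("CREATE TABLE IF NOT EXISTS blobs (name TEXT PRIMARY KEY, data BLOB)")

# Write: one INSERT instead of one file per thumbnail.
thumb = b"\x89PNG fake thumbnail bytes" * 100
con.execute("INSERT OR REPLACE INTO blobs VALUES (?, ?)", ("cat.png", thumb))
con.commit()

# Read it back with a single indexed lookup.
(data,) = con.execute(
    "SELECT data FROM blobs WHERE name = ?", ("cat.png",)
).fetchone()
assert data == thumb
con.close()
```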
wait what
Civilisation (forget which) runs on an SQLite DB. I was rather surprised when I discovered this, back then.
Also renamed xml, renamed json and renamed sqlite.
Those sound fancy, I just use renamed txt files.
.ini is that you?
It’s everything
surprised pikachu face
yaml
No, I am .nfo
Amateurs.
I have evolved from using file extensions, and instead, don’t use any extension!
I don’t even use a file system on my storage drives. I just write the file contents raw and try to memorize where.
Sounds tedious, I’ve just been keeping everything in memory so I don’t have to worry about where it is.
Sounds inefficient. You can only store 8 gigs, and it goes away when you shut off your computer? I just put it on punch cards and feed it into my machine.
So archaic. Real men just flap a butterfly’s wings so that they deflect cosmic rays in such a way that they flip the desired bits in RAM.
Ah yes, good old M-x butterfly.
Linux mostly doesn’t use file extensions… It relies on “magic bytes” in the file.
Same with the web in general – it relies purely on MIME type (e.g. text/html for HTML files) and doesn’t care about extensions at all.

“Magic bytes”? We just called them headers, back in my day (even if sometimes they are at the end of the file).
The library that handles it is literally called “libmagic”. I’d guess the phrase “magic bytes” comes from the programming concept of a magic number?
I did not know about that one! It makes sense though, because a lot of headers would start with, well yeah, “magic numbers”. Makes sense.
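A few of those magic numbers, sniffed by hand. The signatures listed are the real, well-known ones; the helper function name is mine:

```python
# A few well-known signatures ("magic bytes") at the start of a file.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "zip archive (also docx/epub/apk/jar...)",
    b"\x1f\x8b": "gzip",
    b"SQLite format 3\x00": "SQLite database",
}

def sniff(first_bytes: bytes) -> str:
    """Guess a file type from its leading bytes, libmagic-style."""
    for magic, kind in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return kind
    return "unknown"

print(sniff(b"PK\x03\x04rest of a zip"))  # zip archive (also docx/epub/apk/jar...)
```

The real libmagic database handles thousands of these, including formats whose markers sit at an offset rather than at byte zero.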
You can just go in Folder View and uncheck “hide known file extensions” to fix that! ;)
SQLite explicitly encourages using it as an on-disk binary format. The format is well-documented and well-supported, backwards compatible (there’s been no major version changes since 2004), and the developers have promised to support it at least until the year 2050. It has quick seek times if your data is properly indexed, the SQLite library is distributed as a single C file that you can embed directly into your app, and it’s probably the most tested library in the world, with something like 500x more test code than library code.
Unless you’re a developer that really understands the intricacies of designing a binary data storage format, it’s usually far better to just use SQLite.
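A toy version of the “project file is a database” idea, roughly what apps like Audacity do. The schema, file extension, and data here are entirely invented, not any real app's format:

```python
import sqlite3

# The whole "project" lives in one ordinary file with a custom extension.
con = sqlite3.connect("project.song")
con.executescript("""
    CREATE TABLE IF NOT EXISTS tracks (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE IF NOT EXISTS clips (
        track_id INTEGER REFERENCES tracks(id),
        start_ms INTEGER,
        samples  BLOB
    );
""")
con.execute("INSERT INTO tracks (name) VALUES ('vocals')")
con.execute("INSERT INTO clips VALUES (1, 0, ?)", (b"\x00\x01" * 1000,))
con.commit()

# "Loading" the project is just querying it; no custom parser needed.
rows = con.execute(
    "SELECT name, start_ms FROM tracks JOIN clips ON tracks.id = track_id"
).fetchall()
print(rows)  # [('vocals', 0)]
con.close()
```

You get atomic saves, partial loads, and crash recovery for free, which is exactly the pitch the SQLite developers make for it.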
Use binwalk on those
Most of Adobe’s formats are just gzipped XML
Microsoft office also is xml
Nothing wrong with that… Most people don’t need to reinvent the wheel, and choosing a filename extension meaningful to the particular use case is better than leaving it as .zip or .db or whatever.

Totally depends on what the use case is. The biggest problem is that you basically always have to compress and uncompress the file when transferring it. It makes for a good storage format, but a bad format for passing around in ways that need to be constantly read and written.
Plus often we’re talking plain text files being zipped and those plain text formats need to be parsed as well. I’ve written code for systems where we had to do annoying migrations because the serialized format is just so inefficient that it adds up eventually.
Wait till you meet the real evil…
WEBP images. The worst image format on earth for dealing with metadata and timestamps. FFFFUUUUUCK WEBPOOP (and no AVIF please).
XNViewMP is a saviour on all OSes though, thankfully, being the only tool that can batch convert webpoops to any proper image format with preserved metadata.
At least with renamed ZIP files, I literally do not need to care as long as 7-Zip or PeaZip is installed, so I can just “open as * archive”. And for video/audio, have MediaInfo installed on any OS. You will thank me someday.
I’m curious, what’s wrong with webp?
WEBP is very weird to convert to other formats and retain metadata. This is not a problem with JPG, PNG and other formats. And only one tool I mentioned solves that problem.
Is that an issue with the format or the currently available tools though?
Google is responsible for this problem. They created WEBP, which was not necessary to adopt, but shoved it down our throats via Chrome saving images as WEBP by default, and by making websites that use their cloud as a CDN serve WEBPs in general.
When i discovered as a little kid that apk files are actually zips i felt like a detective.
Smh at least use 7z
zstd or leave

They both have their use cases. Zstandard is for compression of a stream of data (or a single file), while 7-Zip is actually two parts: a directory structure (like tar) plus a compression algorithm (like LZMA, which it uses by default) in a single app.
7-Zip is actually adding zstd support: https://sourceforge.net/p/sevenzip/feature-requests/1580/
Well when using zstd, you tar first, something like tar -I zstd -cf my_tar.tar.zst my_files/*. You almost never call zstd directly and always use some kind of wrapper.

Sure, you can tar first. That has various issues though; for example, if you just want to extract one file in the middle of the archive, it still needs to decompress everything up to that point. Something like 7-Zip is more sophisticated in terms of how it indexes files in the archive, so I’m looking forward to them adding zstd support.
FWIW most of my uses of zstd don’t involve tar, but it’s in things like Borgbackup, database systems, etc.
Yes, definitely. My biggest use is transparent filesystem compression, so I completely agree!
zstd may be newer and faster but lzma still compresses more
Thought I’d check on the Linux source tree tar, zstd -19 vs lzma -9:

❯ ls -lh
total 1,6G
-rw-r--r-- 1 pmo pmo 1,4G Sep 13 22:16 linux-6.6-rc1.tar
-rw-r--r-- 1 pmo pmo 128M Sep 13 22:16 linux-6.6-rc1.tar.lzma
-rw-r--r-- 1 pmo pmo 138M Sep 13 22:16 linux-6.6-rc1.tar.zst

About +8% compared to lzma. Decompression time though:

zstd -d -k -T0 *.zst   0,68s user 0,46s system 162% cpu 0,700 total
lzma -d -k -T0 *.lzma  4,75s user 0,51s system  99% cpu 5,274 total
Yeah, I’m going with zstd all the way.
Nice data. Thanks for reminding me why I prefer zstd
damn I did not know zstd was that good. Never thought I’d hear myself say this unironically but thanks Facebook
*Thank you engineers who happen to be working at Facebook
Very true, good point
As always, you gotta know both so that you can pick the right tool for the job.
I’ll gunzip you to oblivion!
A Donnie Darko deep cut