Really interesting article, I didn't know that the template cloning strategy was configurable. Huge fan of template cloning in general; I've used Neon to do it for "live" integration environments, and I have a golang project https://github.com/peterldowns/pgtestdb that uses templates to give you ~unit-test-speed integration tests that each get their own fully-schema-migrated Postgres database.
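For anyone unfamiliar with the mechanism those tools lean on: the core Postgres feature is CREATE DATABASE ... TEMPLATE. A minimal sketch with made-up database names (the general idea, not pgtestdb's exact implementation):

    -- migrate a single template database once, up front
    CREATE DATABASE app_template;
    -- ... run all schema migrations against app_template ...

    -- each test then gets its own cheap copy of the fully-migrated template
    CREATE DATABASE test_0042 TEMPLATE app_template;

    -- and throws it away when it finishes
    DROP DATABASE test_0042;

One caveat: Postgres refuses to copy a database while any other session is connected to it, so the template has to sit idle during each CREATE DATABASE.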
Back in the day (2013?) I worked at a startup where the resident Linux guru had set up "instant" staging environment databases with btrfs. Really cool to see the same idea show up over and over with slightly different implementations. Speed and ease of cloning/testing is a real advantage for Postgres and Sqlite, I wish it were possible to do similar things with Clickhouse, Mysql, etc.
Let's say there is an architect who also owns a construction company. This architect then designs a building and has it built by his employees and contractors.
In such cases the person says, "I have built this building." People who found companies say they have built companies. It's commonly accepted in our society.
So even if Claude built it for GP, as long as GP designed it, paid for the tools (Claude) to build it, and tested it to make sure it works, I personally think he has the right to say he has built it.
If you don't like it, you are not required to use it.
But here's the problem. Five years ago, when someone on here said, "I wrote this non-trivial software", the implication was that a highly motivated and competent software engineer put a lot of effort into making sure that the project meets a reasonable standard of quality and will probably put some effort into maintaining the project.
Today, it does not necessarily imply that. We just don't know.
What an outrageously bad analogy. Everyone involved in that building put their professional reputations and licenses on the line. If that building collapses, the people involved will lose their livelihoods and be held criminally liable.
Meanwhile this vibe coded nonsense is provided “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. We don’t even know if he read it before committing and pushing.
It is the new normal, whether you are against it or not.
If someone used AI, it is a good discussion to have whether they should explicitly disclose it, but people have been using assistive tools, from auto-complete and text expanders to IDE refactoring tools, for a while, and you wouldn't comment that they didn't build it. The lines are becoming more blurry over time, but it is ridiculous to claim that someone didn't build something if they used AI tools.
There was a recent wave of such comments on the Rust subreddit, in exactly this shape: "Oh, you mean you built this with AI." This is highly toxic, leads to no discussion, and is driven by some dark thought in the commentator. I really hope HN will not jump on this bandwagon and will focus instead on creating cool stuff.
Everybody in the industry is vibecoding right now; the things that stick are the ones that have had sufficient quality pushed into them. Having a pessimistic, judgmental surface reaction to everything as "AI slop" is not something I'm going to encourage in my own behavior.
And are we really doing this? Do we need to disclose how every line of code was produced? Why? Are you expecting to see "built with the influence of Stack Overflow answers" or "Google searches" on every single piece of software ever? It's an exercise in pointlessness.
I think you need to start with the following statement:
> We would like to acknowledge the open source people, who are the traditional custodians of this code. We pay our respects to the stack overflow elders, past, present, and future, who call this place, the code and libraries that $program sits upon, their work. We are proud to continue their tradition of coming together and growing as a community. We thank the search engine for their stewardship and support, and we look forward to strengthening our ties as we continue our relationship of mutual respect and understanding
Then if you would kindly say that a Brazilian invented the airplane that would be good too. If you don’t do this you should be cancelled for your heinous crime.
Not sure why this is downvoted. For a critical tool like DB cloning, I'd very much appreciate it if it was hand written, simply because that means it's also been hand reviewed at least once (by definition).
We wouldn't have called it reviewed in the old world, but in the AI coding world we're now in, it makes me realise that yes, it is a form of reviewing.
I use Claude a lot btw. But I wouldn’t trust it on mission critical stuff.
It's being downvoted because the commenter is asking for something that is already in the readme. Furthermore, it's ironic that the person raising the issue is making the same mistake they are calling out: neglecting to read something they didn't write.
It's at the very bottom of the readme, below the MIT license mention. Yes, it's there, but very much in the fine print. I think the easier thing to spot is the CLAUDE.md in the code (and in particular how comprehensive it is).
Again, I love Claude, I use it a ton, but a topic like database cloning requires a certain rigour in my opinion. This repo does not seem to have it. If I had hired a consultant to build a tool like this and would receive this amount of vibe coding, I’d feel deceived. I wouldn’t trust it on my critical data.
App migrations that may fail and need a rollback have the problem that you may not be allowed to wipe any transactions, so you may end up writing data into a parallel world that didn't migrate.
> App migrations that may fail and need a rollback have the problem that you may not be allowed to wipe any transactions, so you may end up writing data into a parallel world that didn't migrate.
This is why migrations are supposed to be backwards compatible
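To make "backwards compatible" concrete, here is the usual expand/contract shape, with made-up table and column names:

    -- expand: add the new column; old application code keeps working untouched
    ALTER TABLE users ADD COLUMN email_normalized text;
    -- backfill while old and new code run side by side
    UPDATE users SET email_normalized = lower(email) WHERE email_normalized IS NULL;
    -- contract: drop the old column only after no deployed version still reads it
    ALTER TABLE users DROP COLUMN email;

At every step a failed deploy can roll back to the previous application version without throwing away writes, which is the property the parent comment is worried about.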
> Eh, DB branching is mostly only necessary for testing - locally
For local DBs, when I break them, I stop the Docker container and wipe the volume mounts, then restart and apply the "migrations" folder (minus whatever new broken migration caused the issue).
Uff, I had no idea that Postgres v15 introduced WAL_LOG and changed the default from FILE_COPY. For (parallel CI) test envs, it makes so much sense to switch back to the FILE_COPY strategy ... and I previously actually relied on that behavior.
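For reference, the strategy is selectable per CREATE DATABASE on PG 15+; a small sketch with illustrative names:

    -- WAL_LOG (the default since PG 15) writes every copied block to the WAL;
    -- FILE_COPY copies the data files directly, at the cost of extra checkpoints
    CREATE DATABASE test_ci_17
      TEMPLATE app_template
      STRATEGY = FILE_COPY;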
In theory, a database that uses immutable data structures (the hash array mapped trie popularized by Clojure) could allow instant clones on any filesystem, not just ZFS/XFS, and allow instant clones of any subset of the data, not just the entire db. I say "in theory" but I actually built this already so it's not just a theory. I never understood why there aren't more HAMT based databases.
Does datomic have built in cloning functionality? I’ve been wanting to try datomic out but haven’t felt like putting in the work to make a real app lol
For anyone looking for a simple GUI for local testing/development of Postgres-based applications: I built a tool a few years ago that simplifies the process: https://github.com/BenjaminFaal/pgtt
Is this basically using templates as "snapshots", and making it easy to go back and forth between them? Little hard to tell from the README but something like that would be useful to me and my team: right now it's a pain to iterate on sql migrations, and I think this would help.
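I can't speak for how the tool does it, but you can get a rough version of that snapshot/restore loop with plain templates, assuming nothing else is connected to the databases while copying (names illustrative):

    -- "snapshot" the current dev database before trying a migration
    CREATE DATABASE app_snapshot TEMPLATE app_dev;

    -- ... apply the work-in-progress migration to app_dev and test it ...

    -- to rewind, recreate app_dev from the snapshot
    DROP DATABASE app_dev;
    CREATE DATABASE app_dev TEMPLATE app_snapshot;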
As an aside, I just jumped around and read a few articles. This entire blog looks excellent. I’m going to have to spend some time reading it. I didn’t know about Postgres’s range types.
Aurora clones are copy-on-write at the storage layer, which solves part of the problem, but RDS still provisions you a new cluster with its own endpoints, etc., which is slow (~10 minutes), so it's not really practical for the integration testing use case.
Is anyone aware of something like this for MariaDB?
Something we've been trying to solve for a long time is having instant DB resets between acceptance tests (in CI or locally) back to our known fixture state, but right now it takes decently long (like half a second to a couple seconds, I haven't benchmarked it in a while) and that's by far the slowest thing in our tests.
I just want fast snapshotted resets/rewinds to a known DB state, but I need to be using MariaDB since it's what we use in production, we can't switch DB tech at this stage of the project, even though Postgres' grass looks greener.
LVM snapshots work well. Used them for years with other database tools. But make sure you allocate enough write space for the COW: when the write space fills up, LVM just 'drops' the snapshot.
Restarting the DB is unfortunately way too slow. We run the DB in a docker container with a tmpfs (in-memory) volume which helps a lot with speed, but the problem is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.
But how does the reset happen fast? The problem isn't with preventing permanent writes or whatever, it's with actually resetting for the next test. Also, using overlayfs will immediately be slower at runtime than tmpfs, which we're already using.
Yeah, unfortunately I think it's not really possible to hit the speed of a TEMPLATE copy with MariaDB. @EvanElias (maintainer of https://github.com/skeema/skeema) was looking into it at one point; might be worth reaching out to him, he's the foremost MySQL expert that I know.
OP here - yes, this is my use case too: integration and regression testing, as well as providing learning environments. It makes working with larger datasets a breeze.
We do this, preview deploys, and migration dry runs using Neon Postgres's branching functionality. One benefit of that vs this seems to be that it works even with active connections, which is good for doing these things on live databases.
OP here - still have to try it (I generally operate at the VM/bare metal level), but my understanding is that the ioctl call would get passed to the underlying volume; i.e. you would have to mount the volume.
This is really cool, looking forward to trying it out.
Obligatory mention of Neon (https://neon.com/) and Xata (https://xata.io/) which both support “instant” Postgres DB branching on Postgres versions prior to 18.
Assuming I'd like to replicate my production database for either staging, or to test migrations, etc,
and that most of my data is either:
- business entities (users, projects, etc)
- and "event data" (sent by devices, etc)
where most of the database size is in the latter category, and that I'm fine with "subsetting" those (eg getting only the last month's "event data")
what would be the best strategy to create a kind of "staging clone"? Ideally I'd like to tell the database (logically, without explicitly locking it): act as though my next operations only apply to items created/updated BEFORE "currentTimestamp", and then:
- copy all my business tables (any update to those after currentTimestamp would be magically ignored, even if it happens during the copy)
- copy a subset of my event data (same constraint)
What's the best way to do this?
pg_dump has a few annoyances when it comes to doing stuff like this: it's tricky to select exactly the data/columns you want, and the dumped format is not always stable. My migration tool pgmigrate has an experimental `pgmigrate dump` subcommand for doing things like this; it might be useful to you or OP, maybe even just as a reference. The docs are incomplete since this feature is still experimental, so file an issue if you have any questions or trouble.
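One more option, sketched under assumptions rather than taken from the article or the thread: a REPEATABLE READ transaction already gives you a consistent as-of-one-moment view even while writes continue, and pg_export_snapshot() lets other sessions (or pg_dump --snapshot) share that same view. Table and column names below are made up:

    -- session 1: open a consistent view and publish its snapshot id
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns an id such as '00000003-0000001B-1'

    -- other sessions can attach to the exact same view before their first query:
    --   BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    --   SET TRANSACTION SNAPSHOT '00000003-0000001B-1';

    -- copy the small business tables wholesale ...
    COPY users TO STDOUT;
    -- ... and only a recent subset of the big event data
    COPY (SELECT * FROM events WHERE created_at >= now() - interval '1 month')
      TO STDOUT;
    COMMIT;

The exported snapshot only stays valid while the exporting transaction is open, so keep that first session alive until all the copies have finished.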
Works with any PG version today. Each branch is a fully isolated PostgreSQL container with its own port. ~2-5 seconds for a 100GB database.
https://github.com/elitan/velo
Main difference from PG18's approach: you get complete server isolation (useful for testing migrations, different PG configs, etc.) rather than databases sharing one instance.
I wonder if this is the new normal. Somebody says "I built Xyz" but then you realize it's vibe coded.
Mind you, I'm not saying it's bad per se. But shouldn't we be open and honest about this?
https://github.com/elitan/velo/blame/12712e26b18d0935bfb6c6e...
Or at least I cannot come up with a usecase for prod.
From that perspective, it feels like it'd be a perfect usecase to embrace the LLM guided development jank
Raised an issue in my previous pet project for doing concurrent integration tests with real PostgreSQL DBs (https://github.com/allaboutapps/integresql) as well.
Also docker link seems to be broken.
Then spin up the DB using that image instead of an empty one for every test run.
This implies starting the DB through docker is faster than what you're doing now of course.
1. Have a local data dir with initial state
2. Create an overlayfs with a temporary directory
3. Launch your job in your docker container with the overlayfs bind mount as your data directory
4. That’s it. Writes go to the overlay and the base directory is untouched
Something like:
https://www.postgresql.org/docs/current/sql-copy.html
It'd be really nice if pg_dump had a "data sample"/"data subset" option but unfortunately nothing like that is built in that I know of.
https://github.com/peterldowns/pgmigrate