EionRobb 11 hours ago

One of the biggest contributors to SQLite blocking I've run into in the past was disk fragmentation.

We had some old Android tablets using our app 8 hours a day for 3-4 years. They'd complain of locking errors and slowness, but every time they'd copy their data to send to us, we couldn't replicate it, even on the same hardware. It wasn't until we bought one user a new device and got them to send us the old one that we could check it out. We thought maybe the SSD had worn out over the few years of continual use, but installing a dev copy of our app was super fast. In the end what did work was to "defrag" the db file by copying it to a new location, deleting the original, then moving it back to the same name. Boom, no more "unable to open database" errors, no more slowdowns.
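
Something like this, if you want to script it (a sketch; the paths are made up, and do it with the app closed so nothing has the file open):

    import os, shutil

    db = "/data/app.db"        # made-up path
    tmp = db + ".copy"

    # Copy to a new file (the copy gets freshly allocated blocks), then swap
    # it into place of the original. Only do this while no connections are open.
    shutil.copy2(db, tmp)
    os.remove(db)
    os.rename(tmp, db)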

I tried this on Jellyfin dbs a few months ago, after running it for years and then suddenly running into performance issues, and it made a big difference there too.

  • simscitizen 2 hours ago

    Copying the file likely forces the creation of a new one with no or lower filesystem fragmentation (e.g. a 1MB file probably gets assigned to 1MB of consecutive FS blocks). Then those FS blocks likely get assigned to flash dies in a way that makes sense (i.e. the FS blocks are evenly distributed across flash dies). This can improve I/O perf by some constant factor. See https://www.usenix.org/system/files/fast24-jun.pdf for instance for more explanation.

    I would say that the much more common degradation is caused by write amplification due to a nearly full flash drive (or a flash drive that appears nearly full to the FTL because the system doesn't implement some TRIM-like mechanism to tell the FTL about free blocks). This generally leads to systemwide slowdown though rather than slowdown accessing just one particular file.

    This was especially prevalent on some older Android devices which didn't bother to implement TRIM or an equivalent feature (which even affected the Google devices, like the Nexus 7).

  • izacus 7 hours ago

    That's much more likely flash degradation than actual fragmentation. Did you use cheap tablets with eMMC storage?

    • EionRobb 2 hours ago

      We had that thought too. I'll have to try to dig out what the tablets were to find out exactly what type - this would have been 3 or 4 years ago now. We sort of ruled that out because:

      The other workaround to get a speed boost was for the user to uninstall and reinstall the app (and then wait for all the data to download again), but that didn't fly because users would delete before they'd synced off all their data, and then data would go missing.

      This was all despite having VACUUM running whenever the app started.

      Whether it was bad flash or not, we still had to try to resolve it, as the client wouldn't buy new hardware until we could prove that we had the knowledge to make the problem go away first :/

    • georgemcbay 4 hours ago

      > That's much more likely flash degradation than actual fragmentation. Did you use cheap tablets with eMMC storage?

      My understanding of the parent reply's situation is that this was happening on the tablets of their users, so it kinda doesn't matter that it can be avoided by not using cheap tablets.

      Most apps aren't in a position to tell their users that they are on their own when they run into what feels like an unreasonable app slowdown because they didn't buy a good enough device to run it on, especially when they've previously experienced it running just fine.

      If all their apps feel like crap on that tablet, sure, that might fly... but if it's only your app (or only a small set of apps that use SQLite in the same way the OP's company did) that feels like crap after a while, that's effectively a you problem (to solve) even if it's not really a you problem.

      In any case, it's an interesting data point and could be very useful information to others who run into similar issues.

  • Multicomp 11 hours ago

    Would the SQLite vacuum function help with that?

    • mceachen 11 hours ago

      You can VACUUM INTO, ~~but standard vacuum won’t rewrite the whole db~~ (vacuum rewrites the whole db)

      https://sqlite.org/lang_vacuum.html

      (Edit: if multiple processes are concurrently reading and writing, and one process vacuums, verify that the right things happen: specifically, that concurrent writes from other processes during a vacuum don’t get erased by the other processes’ vacuum. You may need an external advisory lock to avoid data loss).
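
      For reference, using it from Python looks roughly like this (a sketch; filenames are made up):

          import sqlite3

          # isolation_level=None = autocommit; VACUUM can't run inside a transaction.
          con = sqlite3.connect("app.db", isolation_level=None)
          # Rebuilds the database into a brand-new file, defragmented and without
          # free pages; the original file is left untouched.
          con.execute("VACUUM INTO 'app-compacted.db'")
          con.close()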

      • return_to_monke 10 hours ago

        > You can VACUUM INTO, but standard vacuum won’t rewrite the whole db.

        This is not true. From the link you posted:

        > The VACUUM command works by copying the contents of the database into a temporary database file and then overwriting the original with the contents of the temporary file.

      • teddyh 2 hours ago

        > You can VACUUM INTO, ~~but standard vacuum won’t rewrite the whole db~~ (vacuum rewrites the whole db)

        HN does not support whatever markup you are trying to use. You have to use Unicode:

        “You can VACUUM INTO, b̶u̶t̶ ̶s̶t̶a̶n̶d̶a̶r̶d̶ ̶v̶a̶c̶u̶u̶m̶ ̶w̶o̶n̶’̶t̶ ̶r̶e̶w̶r̶i̶t̶e̶ ̶t̶h̶e̶ ̶w̶h̶o̶l̶e̶ ̶d̶b̶ (vacuum rewrites the whole db)”

  • didip 10 hours ago

    This is fascinating. What would be the solution for this? You can’t ask users to defrag.

    • DANmode 10 hours ago

      Perform the file operation, after zipping the existing db as a backup, and leaving the original where it sits.

      Success, performance increase.

      Failure, no change.

      • axitanull 2 hours ago

        Forgive my lack of knowledge, but how would simply zipping the original file "defrag" it?

        Shouldn't the file be moved into a different disk fragment first, for that to happen?

asa400 17 hours ago

In SQLite, transactions by default start in “deferred” mode. This means they do not take a write lock until they attempt to perform a write.

You get SQLITE_BUSY when transaction #1 starts in read mode, transaction #2 starts in write mode, and then transaction #1 attempts to upgrade from read to write mode while transaction #2 still holds the write lock.

The fix is to set a busy_timeout and to begin any transaction that does a write (any write, even if it is not the first operation in the transaction) in “immediate” mode rather than “deferred” mode.
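
In Python's sqlite3, for example, that looks roughly like this (a sketch; the table and values are made up):

    import sqlite3

    # isolation_level=None puts the connection in autocommit mode so we can issue
    # BEGIN IMMEDIATE ourselves instead of the driver's implicit deferred BEGIN.
    con = sqlite3.connect("app.db", isolation_level=None)
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("PRAGMA busy_timeout=5000")   # wait up to 5s for the write lock

    con.execute("BEGIN IMMEDIATE")            # take the write lock up front
    try:
        con.execute("UPDATE items SET name = ? WHERE id = ?", ("x", 1))
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")
        raise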

https://zeroclarkthirty.com/2024-10-19-sqlite-database-is-lo...

  • simonw 16 hours ago

    Yeah I read the OP and my first instinct was that this is SQLITE_BUSY. I've been collecting posts about that here: https://simonwillison.net/tags/sqlite-busy/

    • gwking 8 hours ago

      One tidbit that I don't see mentioned here yet is that ATTACH requires a lock. I just went looking for the documentation about this and couldn't find it, especially for WAL mode (https://www.sqlite.org/lockingv3.html mentions the super-journal, but the WAL docs do not mention ATTACH at all).

      I have a python web app that creates a DB connection per request (not ideal I know) and immediately attaches 3 auxiliary DBs. This is a low traffic site but we have a serious reliability problem when load increases: the ATTACH calls occasionally fail with "database is locked". I don't know if this is because the ATTACH fails immediately without respecting the normal 5 second database timeout or what. To be honest I haven't implemented connection pooling yet because I want to understand what exactly causes this problem.

  • summarity 17 hours ago

    I've always tried to avoid situations that could lead to SQLITE_BUSY. SQLITE_BUSY is an architecture smell. For standard SQLite in WAL, I usually structure an app with a read "connection" pool, and a single-entry write connection pool. Making the application aware of who _actually_ holds the write lock gives you the ability to proactively design access patterns, not try to react in the moment, and to get observability into lock contention, etc.
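
        A minimal sketch of that shape (Python here; names are made up, and a real version would add pool sizing, retries and observability):

            import sqlite3, threading
            from contextlib import contextmanager

            class Db:
                def __init__(self, path):
                    self._path = path
                    self._writer = sqlite3.connect(path, isolation_level=None,
                                                   check_same_thread=False)
                    self._writer_lock = threading.Lock()   # the single-entry "pool"
                    self._local = threading.local()        # one read connection per thread

                @contextmanager
                def write(self):
                    # The app always knows who holds the write lock, and for how long.
                    with self._writer_lock:
                        self._writer.execute("BEGIN IMMEDIATE")
                        try:
                            yield self._writer
                            self._writer.execute("COMMIT")
                        except Exception:
                            self._writer.execute("ROLLBACK")
                            raise

                def read(self):
                    conn = getattr(self._local, "conn", None)
                    if conn is None:
                        conn = self._local.conn = sqlite3.connect(self._path)
                    return conn

        db.write() then becomes the only path writes take, which is also a natural place to hang metrics for lock wait times.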

    • simonw 16 hours ago

      Even with that pattern (which I use too) you still need to ensure those write operations always start a transaction at the beginning in order to avoid SQLITE_BUSY.

      • summarity 14 hours ago

        Yes, indeed. In my apps, which are mostly Nim, my pool manager ensures this always happens - along with a host of other optimizations. I often start with barebones SQLite and then later switch to LiteSync (distributed SQLite with multi-master replication), so I keep the lock management at the app level to adapt to whatever backend I'm using.

        • probst 11 hours ago

          I am really curious about LiteSync. Any chance you could share a bit on your experiences with it (recognising it’s somewhat off-topic…). Do you run with multiple primaries? What sort of use cases do you reach to it for? Conflict resolution seems a bit simplistic at first glance (from the perspective of someone very into CRDTs), have you experienced any issues as a result of that?

    • mickeyp 17 hours ago

      I mean, you're not wrong, and that is one way to solve it, but the whole point of a sensibly-designed WAL -- never mind database engine -- is that you do not need to commit to some sort of actor model to get your db to serialise writes.

      • sethev 15 hours ago

        These are performance optimizations. SQLite does serialize writes. Avoiding concurrent writes to begin with just avoids some overhead on locking.

        • mickeyp 15 hours ago

          "performance optimisation" --- yeees, well, if you don't care about data integrity between your reads and writes. Who knows when those writes you scheduled really get written. And what of rollbacks due to constraint violations? There's we co-locate transactions with code: they are intertwined. But yes, a queue-writer is fine for a wide range of tasks, but not everything.

          It's that we need to contort our software to make sqlite not suck at writes that is the problem.

          • jitl 9 hours ago

            > Who knows when those writes you scheduled really get written

            I await the write to complete before my next read in my application logic, same as any other bit of code that interacts with a database or does other IO. Just because another thread handles interacting with the writer connection, doesn't mean my logic thread just walks away pretending the write finished successfully in 0ms.

          • sethev 15 hours ago

            This is just FUD. The reason SQLite does locking to begin with is to avoid data corruption. Almost every statement this blog post makes about concurrency in SQLite is wrong, so it's little surprise that their application doesn't do what they expect.

            >Who knows when those writes you scheduled really get written

            When a commit completes for a transaction, that transaction has been durably written. No mystery. That's true whether you decide to restrict writes to a single thread in your application or not.

            • catlifeonmars 14 hours ago

              > When a commit completes for a transaction, that transaction has been durably written. No mystery. That's true whether you decide to restrict writes to a single thread in your application or not.

              Usually this is true but there are edge cases for certain journaled file systems. IIRC sqlite.org has a discussion on this.

              • zimpenfish 14 hours ago

                > there are edge cases for certain journaled file systems. IIRC sqlite.org has a discussion on this.

                Can't currently find it but I guess it comes under the "if the OS or hardware lies to SQLite, what can it do?" banner?

            • mickeyp 14 hours ago

              You are talking about low level stuff like syncing to the filesystem, ensuring that data is journalled and atomicity is maintained; I am in actual fact not.

              Dislocating DML from the code that triggers it creates many problems around ensuring proper data integrity, and it divorces you from consistent reads of uncommitted data that you may want to tightly control before committing. By punting it to a dedicated writer you're removing the ability to ensure serialised modification of your data and the ability to cleanly react to integrity errors that may arise. If you don't need that? Go ahead. But it's not FUD. We build relational, ACID-compliant databases this way for a reason.

              • sethev 14 hours ago

                Oh, I think you're picturing executing your transaction logic and then sending writes off to a background queue. I agree, that's not a general strategy - it only works for certain cases.

                I just meant that if you can structure your application to run write transactions in a single thread (the whole transaction and its associated logic, not just deferring writing the end result to a separate thread) then you minimize contention at the SQLite level.

        • ncruces 15 hours ago

          SQLite, for the most part, uses polling locks. That means it checks if a lock is available to be taken, and if it's not, it sleeps for a bit, then checks again, until this times out.

          This becomes increasingly inefficient as contention increases, as you can easily get into a situation where everyone is sleeping, waiting for others, for a few milliseconds.

          Ensuring all, or most, writes are serialized improves this.

  • liuliu 4 hours ago

    Note that busy_timeout is not applicable in this case (the SQLITE_BUSY is issued immediately, with no wait).

    Also, this is because of WAL mode (and I believe applies only to WAL mode, since there are really no concurrent reads in the other mode).

    The reason is that pages in WAL mode are appended to a single log file. Hence, if you read something inside a BEGIN transaction and later want to mutate something else, another page could already have been appended that would interfere with the strict serializability guarantee of WAL mode. Hence, SQLite has to fail at the point of lock upgrade.

    Immediate mode solves this problem because at BEGIN time (or more correctly, at the time of the first read in that transaction), a write lock is acquired, hence no page can be appended between the read and the write, unlike in deferred mode.
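
    A sketch that shows the upgrade failure with two connections (filename and table are made up):

        import sqlite3

        # setup: a WAL-mode database with one row (hypothetical table)
        setup = sqlite3.connect("demo.db", isolation_level=None)
        setup.execute("PRAGMA journal_mode=WAL")
        setup.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, n INTEGER)")
        setup.execute("INSERT OR REPLACE INTO t VALUES (1, 0)")

        r = sqlite3.connect("demo.db", isolation_level=None)  # deferred reader
        w = sqlite3.connect("demo.db", isolation_level=None)  # another writer

        r.execute("BEGIN")                      # deferred: read snapshot only
        r.execute("SELECT n FROM t").fetchall()

        w.execute("UPDATE t SET n = n + 1")     # appends a page to the WAL

        # The deferred transaction now tries to upgrade to a write. Its snapshot
        # is stale, so SQLite returns SQLITE_BUSY immediately - no busy_timeout
        # sleep can fix it; the transaction has to be rolled back and retried.
        try:
            r.execute("UPDATE t SET n = n + 1")
        except sqlite3.OperationalError as e:
            print(e)                            # "database is locked"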

  • mickeyp 17 hours ago

    Indeed. Everyone who uses sqlite will get burnt by this one day and spend a lot of time chasing down errant write-upgraded transactions that cling on for a little bit longer than intended.

  • tlaverdure 17 hours ago

    Yes, these are both important points. I didn't see any mention of SQLITE_BUSY in the blog post and wonder if that was never configured. Something that people miss quite often.

  • chasil 14 hours ago

    In an Oracle database, there is only one process that is allowed to write to tablespace datafiles, the DBWR (or its slaves). Running transactions can write to ram buffers and the redo logs only.

    A similar design for SQLite would allow only one writer, with all other processes passing their SQL to it.

  • kijin 15 hours ago

    Wouldn't that "fix" make the problem worse on the whole, by making transactions hold onto write locks longer than necessary? (Not trying to disagree, just curious about potential downsides.)

    • asa400 10 hours ago

      It’s a reasonable question!

      In WAL mode, writers and readers don’t interfere with each other, so you can still do pure read queries in parallel.

      Only one writer is allowed at a time no matter what, so writers queue up and you have to take the write lock at some point anyway.

      In general, it’s hard to say without benchmarking your own application. This will, however, get rid of the SQLITE_BUSY errors that fire immediately in the read/write/upgrade-read-to-write scenario I described. You’d be retrying the transactions that fail with SQLITE_BUSY anyway, so that retrying is what you’d need to benchmark against.

      It’s a subtle problem, but I’d rather queue up writes than have to write the code that retries failed transactions that shouldn’t really be failing.

  • BobbyTables2 17 hours ago

    That's the best explanation I’ve seen of this issue.

    However, it screams of a broken implementation.

    Imagine if Linux PAM logins randomly failed if someone else was concurrently changing their password or vice versa.

    In no other application would random failures due to concurrency be tolerated.

    SQLite is broken by design; the world shouldn’t give them a free pass.

    • asa400 17 hours ago

      SQLite is a truly remarkable piece of software that is a victim both of its own success and its unwavering commitment to backward compatibility. It has its quirks. There are definitely things we can learn from it.

thayne 14 hours ago

There seem to be some misunderstandings in this:

> If your application fully manages this file, the assumption must be made that your application is the sole owner of this file, and nobody else will tinker with it while you are writing data to it.

Kind of, but sqlite does locking for you, so you don't have to do anything to ensure your process is the only one writing to the db file.

> [The WAL] allows multiple parallel writes to take place and get enqueued into the WAL.

The WAL doesn't allow multiple parallel writes. It just allows reads to be concurrent with a single write transaction.

  • Sammi 12 hours ago

    Yeah... I adore Sqlite and upvote anything about it, but I couldn't upvote this article because it was just so poorly informed. It gets the very basics on sqlite concurrency wrong.

mickeyp 17 hours ago

SQLite is a cracking database -- I love it -- that is let down by its awful defaults in service of 'backwards compatibility.'

You need a brace of PRAGMAs to get it to behave reasonably sanely if you do anything serious with it.

  • tejinderss 16 hours ago

    Do you know any good default PRAGMAs that one should enable?

    • mickeyp 16 hours ago

      These are my PRAGMAs and not your PRAGMAs. Be very careful about blindly copying something that may or may not match your needs.

          PRAGMA foreign_keys=ON
          PRAGMA recursive_triggers=ON
          PRAGMA journal_mode=WAL
          PRAGMA busy_timeout=30000
          PRAGMA synchronous=NORMAL
          PRAGMA cache_size=10000
          PRAGMA temp_store=MEMORY
          PRAGMA wal_autocheckpoint=1000
          PRAGMA optimize <- run on tx start
      
      Note that I do not use auto_vacuum, as DELETEs are uncommon in my workflows and I am fine with the trade-off; if I do need it I can always PRAGMA it.

      defer_foreign_keys is useful if you understand the pros and cons of enabling it.

      • mikeocool 16 hours ago

        Using strict tables is also a good thing to do, if you value your sanity.

      • porridgeraisin 15 hours ago

        You should PRAGMA optimize before tx end, not at tx start.

        Except for long lived connections where you do it periodically.

        https://www.sqlite.org/lang_analyze.html#periodically_run_pr...

        • masklinn 15 hours ago

          Also foreign_keys has to be set per connection but journal_mode is sticky (it changes the database itself).

          • porridgeraisin 14 hours ago

            Yes, if journal_mode was not sticky, a new process opening the db would not know to look for the wal and shm files and read the unflushed latest data from there. On the other hand, foreign key enforcement has nothing to do with the file itself, it's a transaction level thing.

            In any case, there is no harm in setting sticky pragmas every connection.
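
            e.g. a small connection factory that applies everything on open (a sketch; pick values that match your own needs):

                import sqlite3

                def open_db(path):
                    con = sqlite3.connect(path, isolation_level=None)
                    # Sticky pragmas (stored in the file, like journal_mode) are
                    # harmless to re-run; per-connection ones (foreign_keys,
                    # busy_timeout, ...) must be set on every new connection.
                    con.execute("PRAGMA journal_mode=WAL")
                    con.execute("PRAGMA foreign_keys=ON")
                    con.execute("PRAGMA busy_timeout=5000")
                    con.execute("PRAGMA synchronous=NORMAL")
                    return con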

      • adzm 16 hours ago

        Really, no mmap?

        • metrix 16 hours ago

          I'm curious what your suggested mmap pragma would be.

  • mkoubaa 16 hours ago

    Seems like it's asking to be forked

    • pstuart 14 hours ago

      The real fork is DuckDB, in a way; it has SQLite compatibility and so much more.

      The SQLite team also has 2 branches that address concurrency that may someday merge to trunk, but by their very nature they are quite conservative and it may never happen unless they feel it passes muster.

      https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co... https://sqlite.org/hctree/doc/hctree/doc/hctree/index.html

      As to the problem that prompted the article, there's another way of addressing the problem that is kind of a kludge but is guaranteed to work in scenarios like theirs: have each thread in the parallel scan write to its own temporary database and then bulk import them once the scan is done.
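
      The merge step is roughly this (a sketch; filenames and table are made up):

          import sqlite3

          main = sqlite3.connect("library.db", isolation_level=None)
          # Each scan worker wrote to its own shard file; fold them in afterwards.
          for shard in ["shard-0.db", "shard-1.db", "shard-2.db"]:
              main.execute("ATTACH DATABASE ? AS shard", (shard,))
              main.execute("INSERT INTO items SELECT * FROM shard.items")
              main.execute("DETACH DATABASE shard")
          main.close()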

      It's easy to get hung up on having "a database" but sharding to different files by use is trivial to do.

      Another thing to bear in mind with a lot of SQLite use cases is that the data is effectively read only save for occasional updates. Read only databases are a lot easier to deal with regarding locking.

      • jitl 13 hours ago

        DuckDB is similar as an in process SQL database, but lacking btree-style ordered indexes makes it a poor performer in key lookups and order-by / range scans if your table is any size larger than trivial.

        It’s the classic OLAP (DuckDB) vs OLTP (SQLite) trade off between the two. DuckDB is very good at many things but most applications that need a traditional SQL DB will probably not perform well if you swap it over to DuckDB.

        • geysersam 13 hours ago

          Duckdb has optional adaptive radix tree indexing (https://duckdb.org/docs/stable/sql/indexes.html)

          • jitl 8 hours ago

            Oops, I stand corrected!

            What I remember about our evaluation of DuckDB in 2024 is that we concluded that (1) the major limitations were lack of range-scan and index-lookup performance (maybe w/ joins? or update where?), and (2) the DuckDB Node.js module segfaulted too much. Perhaps the engineers somehow missed the ART index, or it could also be the restriction that data fit in memory to create an index on it (our test dataset was about 50GB).

        • Kinrany 13 hours ago

          That's surprising, surely OLAP use cases also need key lookups?

      • Kinrany 13 hours ago

        > Read only databases are a lot easier to deal with regarding locking.

        "A lot easier" sounds like an understatement. What's there to lock when the data is read only?

ricardobeat 16 hours ago

Articles like this leave me with an uneasy feeling that the “solutions” are just blind workarounds - more debugging/research should be able to expose exactly what the problem is, now that would be something worth sharing.

  • kccqzy 11 hours ago

    Articles like this give me the feeling that the author did a little bit of research and shared a suboptimal solution, and was hoping that experts on HN would present better solutions. Wasn't there a saying about how the best way to get correct answers is to post not just the question but the wrong answers to it?

ddtaylor 14 hours ago

I have encountered this problem on Jellyfin before. It works like a dream, but there are some very strange circumstances that can cause the database to become locked and then just not work until I restart the docker container. If I check the logs it just says stuff about the database being locked. It happens quite rarely and seems to be when we fidget in the menus on the smart TV like starting to watch a show to realize it's the wrong episode as you click the button, then spam the back button, etc.

Leherenn 16 hours ago

A bit off topic, but there seems to be quite a few SQLite experts here.

We're having trouble with memory usage when using SQLite in-memory DBs with "a lot" of inserts and deletes. Like maybe inserting up to 100k rows in 5 minutes, deleting them all after 5 minutes, and doing this for days on end. We see memory usage slowly creeping up over hours/days when doing that.

Any settings that would help with that? It's particularly bad on macOS, we've had instances where we reached 1GB of memory usage according to Activity Monitor after a week or so.

  • kachapopopow 16 hours ago

    sounds like normal behavior of adjusting buffers to better fit the usecase, not sure if it applies to sqlite or if sqlite even implements dynamic buffers.

  • pstuart 14 hours ago

    If you're deleting all rows you can also just drop the table and recreate it.

rpcope1 7 hours ago

Do these guys really not understand that WAL is still single writer multi reader? You could do concurrent (but not parallel) write DML in both the normal and WAL journaling models. WAL alleviates read transactions being blocked by writers but you still have to lock it down to a single writer. It would be nice if SQLite3 had full blown MVCC, but it still works if you understand it.

mangecoeur 16 hours ago

Sqlite is a great bit of technology but sometimes I read articles like this and think, maybe they should have used Postgres. If you don't specifically need the "one file portability" aspect of sqlite, or it's not embedded (in which case you shouldn't have concurrency issues), Postgres is easy to get running and solves these problems.

  • heavyset_go an hour ago

    I run Jellyfin in a multi-arch cluster because I hate myself, and this would force me to think about where Jellyfin/Postgres is deployed because Postgres databases aren't portable.

    I already had to do that for my authoritative PG deployment, and my media manager shouldn't require a full RDBMS.

    Using SQLite for Jellyfin has made running it wherever really, really easy, same thing with doing backups and lazy black box debugging.

  • abound 15 hours ago

    Jellyfin is a self-hostable media server. If they "used Postgres", that means anyone who runs it needs Postgres. I think SQLite is the better choice for this kind of application, if one is going to choose a single database instead of some pluggable layer

    • tombert 9 hours ago

      I share my Jellyfin with about a dozen people, and it's not weird to have several people streaming at the same time. I have a two gigabit connection so bandwidth isn't generally an issue, but I've had issues when three people are all transcoding VC-1 encoded video to H264 in software.

      This is something that I think I could fairly easily ameliorate if I could simply load-balance the application server by user, but historically (with Emby), I've not been able to do that due to SQLite locking not allowing me to run multiple instances pointing to the same config instance.

      There's almost certainly ways to do this correctly with SQLite but if they allowed for using almost literally any other database this would be a total non-issue.

      ETA:

      For clarification if anyone is reading this, all this media LEGALLY OBTAINED with PERMISSION FROM THE COPYRIGHT HOLDER(S).

      • reddalo 8 hours ago

        Yeah, I'm sure those twelve people love watching your vacation clips all the time ;)

    • reddalo 8 hours ago

      They're actually planning on migrating to Postgres in a future release:

      >[...] it also opens up new possibilities - not officially yet, but soon - for running Jellyfin backed by "real" database systems like PostgreSQL, providing new options for redundancy, load-balancing, and easier maintenance and administration. The future looks very bright!

      https://jellyfin.org/posts/jellyfin-release-10.11.0/

    • morshu9001 14 hours ago

      Exactly, there are use cases where SQLite makes sense but you also want to make it faster. I really don't get why there isn't a more portable Postgres.

      • zie 10 hours ago

        There is, you can even run PG under wasm if you are desperate. :)

        SQLite is probably the better option here and in most places where you want portability though.

  • petters 11 hours ago

    Jellyfin is mostly for a single household, right? Sqlite should be much more than sufficient for Jellyfin (if used correctly). Unfortunately, reading this article you get the impression that they are not using it optimally

    • nick_ 9 hours ago

      Agreed. How can a media file sharing app possibly saturate Sqlite's write limit? I would use an app-level global lock on all writes to Sqlite.

  • thayne 15 hours ago

    Using postgres would make it significantly more complicated for Jellyfin users to install and set up Jellyfin. And then users would need to worry about migrating the databases when PostgreSQL has a major version upgrade. An embedded database like sqlite is a much better fit for something like Jellyfin.

    • throwaway894345 14 hours ago

      As a Jellyfin user, this hasn’t been my experience. I needed to do a fair bit of work to make sure Jellyfin could access its database no matter which node it was scheduled onto and that no more than one instance ever accessed the database at the same time. Jellyfin by far required more work to setup maintainably than any of the other applications I run, and it is also easily the least reliable application. This isn’t all down to SQLite, but it’s all down to a similar set of assumptions (exactly one application instance interacting with state over a filesystem interface).

      • heavyset_go an hour ago

        Jellyfin isn't a Netflix replacement, it's a desktop application that's a web app by necessity. Treat it like a desktop app and you won't have these issues.

      • stormbeard 12 hours ago

        Jellyfin isn’t meant to be some highly available distributed system, so of course this happens when you try to operate it like one. The typical user is not someone trying to run it via K8s.

        • throwaway894345 11 hours ago

          Yeah, I agree, though making software that can run in a distributed configuration is a matter of following a few basic principles, and would be far less work than what the developers have spent chasing down issues trying to make SQLite work for their application.

          The effort required to put an application on Kubernetes is a pretty good indicator of software quality. In other words, I can have a pretty good idea about how difficult a software is to maintain in a single-instance configuration by trying to port it to Kubernetes.

      • thayne 13 hours ago

        Is running multiple nodes a typical way to run Jellyfin though? I would expect that most Jellyfin users only run a single instance at a time.

        • throwaway894345 11 hours ago

          Yes, but you have to go out of your way when writing software to make it so the software can only run on one node at a time. Or rather, well-architected software should require minimal, isolated edits to run in a distributed configuration (for example, replacing SQLite with a distributed SQLite).

          • thayne 7 hours ago

            That's just not true. Distributed software is much more complicated and difficult than non-distributed software. Distributed systems have many failure modes that you don't have to worry about in non-distributed systems.

            Now maybe you could have an abstraction layer over your storage layer that supports multiple data stores, including a distributed one. But that comes with tradeoffs, like being limited to the least common denominator of features of the data stores, and having to implement the abstraction layer for multiple data stores.

            • throwaway894345 3 hours ago

              I’m a distributed systems architect. I design, build, and operate distributed systems.

              > Distributed systems have many failure modes that you don't have to worry about in non-distributed systems.

              Yes, but as previously mentioned, those failure modes are handled by abiding a few simple principles. It’s also worth noting that multiprocess or multithreaded software have many of the same failure modes, including the one discussed in this post. Architecting systems as though they are distributed largely takes care of those failure modes as well, making even single-node software like Jellyfin more robust.

              > Now maybe you could have an abstraction layer over your storage layer that supports multiple data stores, including a distributed one. But that comes with tradeoffs, like being limited to the least common denominator of features of the data stores, and having to implement the abstraction layer for multiple data stores.

              Generally I just target storage interfaces that can be easily distributed—things like Postgres (or maybe dqlite?) for SQL databases or an object storage API instead of a filesystem API. If you build a system like it could be distributed one day, you’ll end up with a simpler, more modular system even if you never scale to more than one node (maybe you just want to take advantage of parallelism on your single node, as was the case in this blog post).

  • amaccuish 13 hours ago

    Their whole recent rewrite of the DB code (to Entity Framework) is to allow the user a choice of DB in the future.

  • zeroq 5 hours ago

    Sqlite has so many small benefits for tiny projects it can't be easily replaced.

    It's like saying "oh, you want to visit the Austrian countryside next month and you're asking for advice on the best tent? How about you build a cabin instead?".

  • o11c 11 hours ago

    Even with postgres, you don't have to use the system instance; there's nothing stopping you from running the server as a child process.

    You probably need to support this for your testsuite anyway.

    • hamandcheese 10 hours ago

      Maybe in theory. In practice, most people who need Postgres for their test suite will boot an instance in a docker container in CI, and maybe just assume a system version is available for local dev.

  • bambax 15 hours ago

    Jellyfin is a media server app that gets installed on a great variety of platforms and while it would certainly be possible to add a postgres server to the install, the choice of sqlite is more than justified here IMHO.

  • throwaway894345 15 hours ago

    As a user of Jellyfin, I’m very sad that it doesn’t just use Postgres. I basically have to run an NFS system just for Jellyfin so that its data can be available to it no matter which node it gets scheduled on and also that there are never multiple instances running at the same time, even during deployments (e.g., I need to take care that deployments completely stop the first Jellyfin instance before starting the subsequent instance). There are so many unnecessary single points of failure, and Postgres would make a pretty big one go away (never mind addressing the parallelism problems that plague the developers).

    Jellyfin is by far the least reliable application I run, but it also seems to be best in class.

    • ants_everywhere 7 hours ago

      I have the same experience. SQLite has been a source of most Jellyfin problems, and Jellyfin has more problems than the rest of the ~ 150 containers I run regularly.

      A stateless design where a stateless jellyfin server talks to a postgres database would be simpler and more robust.

      • throwaway894345 3 hours ago

        Yeah, honestly I’m kind of thinking about a media server architecture that has a stateless media server that vends links to pre-transcoded media in object storage (which video players would source from), since pretty much anything can handle mp4/h264/AAC video. Maybe in the future I could add on some on-the-fly transcoding (which would happen on a dedicated cluster, reading and writing to object storage), but that seems like a pretty big undertaking.

    • KingMob 12 hours ago

      I gave up on Jellyfin after media library updates kept hanging on certain video files, and switched to the original Emby it was forked from (iiuc).

      Emby has a scarily-ancient install process, but it's been working just fine with less hassle.

  • eduction 16 hours ago

    100%. I specifically clicked for the “why you should care” and was disappointed I could not find it.

    I certainly don’t mind if someone is pushing the limits of what SQLite is designed for but personally I’d just rather invest the (rather small) overhead of setting up a db server if I need a lot of concurrency.

slashdave 9 hours ago

I am a little confused, but maybe I am missing some context? Wouldn't using a proper database be a lot easier than all of this transaction hacking? I mean, is Postgres that hard to use?

tombert 11 hours ago

Does this mean I can finally load-balance with multiple Jellyfin instances?

A million years ago, back when I still used Emby, I was annoyed that I couldn't use it across multiple nodes in Docker Swarm due to the locking of SQLite. It really annoyed me, enough to where I started (but never completed) a driver to change the DB to postgres [1]. I ended up moving everything over to a single server, which is mostly fine unless I have multiple people transcoding at the same time.

If this is actually fixed then I might have an excuse to rearchitect my home server setup again.

[1] https://github.com/Tombert/embypostgres

  • Yodel0914 8 hours ago

    Jellyfin have just gone through a massive refactor and pulled all their data access code into EF Core. This opens the path for supporting different RDBMSs, which I think is next on their list.

yread 13 hours ago

I'm a bit confused. The point of this article is that the author used .NET Interceptors and TagWith to somehow tag his EF Core operations so that they make their own busy_timeout (which EF Core devs think is not necessary https://github.com/dotnet/efcore/issues/28135 ) or do a horrible global lock? No data is presented on how it improved things if it did. Nor is it described which operations were tagged with what. The only interesting thing about it are the interceptors but that's somehow not discussed in HN's comments at all.

dv35z 16 hours ago

Curious if anyone has strategies on how to perform parallel writes to an SQLite database using Python's `multiprocessing` Pool.

I am using it to loop through a database of 11,000 words, hit an HTTP API for each (ChatGPT) and generate example sentences for the word. I would love to be able to asynchronously launch these API calls and have them come back and update the database row when ready, but not sure how to handle the database getting hit by all these writes from (as I understand it) multiple instances of the same Python program/function.

  • zie 16 hours ago

    Technically SQLite can only have 1 writer at any given moment, but it can appear like it works across multiple writers and let it serialize the calls for you.

    By default SQLite will not do what you want out of the box. You have to turn on some feature flags(PRAGMA) to get it to behave for you. You need WAL mode, etc read:

    * https://kerkour.com/sqlite-for-servers
    * https://zeroclarkthirty.com/2024-10-19-sqlite-database-is-lo...

    My larger question is: why multiprocessing? This looks like an IO-heavy workload, not CPU-bound, so python asyncio or python threads would probably do you better.

    multiprocessing is for when your resource hog is the CPU (probably 1 python process per CPU), not IO.
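
    Something in this direction would probably do (a sketch; fetch_sentence and the table are made up, and all writes stay on one connection in the main thread):

        import sqlite3
        from concurrent.futures import ThreadPoolExecutor, as_completed

        def fetch_sentence(word):
            ...  # hypothetical: call the HTTP API and return an example sentence

        con = sqlite3.connect("words.db")
        con.execute("PRAGMA journal_mode=WAL")
        con.execute("PRAGMA busy_timeout=5000")
        words = [w for (w,) in con.execute("SELECT word FROM words WHERE sentence IS NULL")]

        # Threads only do the slow HTTP calls; the single main-thread connection
        # does all the writes, so there's no writer contention at all.
        with ThreadPoolExecutor(max_workers=20) as pool:
            futures = {pool.submit(fetch_sentence, w): w for w in words}
            for fut in as_completed(futures):
                word = futures[fut]
                con.execute("UPDATE words SET sentence = ? WHERE word = ?",
                            (fut.result(), word))
                con.commit()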

    • dv35z 11 hours ago

      I will check into `asyncio` and Python threads. I used multiprocessing as my first project into asynchronous programming. The previous use-case was using Python + multiprocessing to run MacOS `say` (using Python subprocess) - so I could invoke it 10-20 times simultaneously on my computer, rather than waiting for each to complete. I experimented a bit with how many concurrent processes to run (using `time` to clock how long the runs were).

  • sethev 16 hours ago

    Have you tried it?

    What you're describing sounds like it would work fine to me. The blog post is misleading imho - it implies that SQLite doesn't handle concurrency at all. In reality, you can perform a bunch of writes in parallel and SQLite will handle running them one after the other internally. This works across applications and processes, you just need to use SQLite to interact with the database. The blog post is also misleading when it implies that the application has to manage access to the database file in some way.

    Yes, it's correct that only one of those writes will execute at a time but it's not like you have to account for that in your code, especially in a batch-style process like you're describing. In your Python code, you'll just update a row and it will look like that happens concurrently with other updates.

    I'll bet that your call to ChatGPT will take far longer than updating the row, even accounting for time when the write is waiting for its turn in SQLite.

    Use WAL-mode for the best performance (and to reduce SQLITE_BUSY errors).

    • dv35z 11 hours ago

      I haven't tried it yet - async processing (and even using SQLite) is new to me, so I'm trying to figure out solution patterns which work for the now, and also I can continue to invest my knowledge in to solve future problems.

      I will look into WAL mode. I am enjoying using SQLite (and aware that its not the solution for everything), and have several upcoming tasks which I'm planning to use async stuff - and yes, trying to find the balance between how to handle those async tasks (Networky HTTP calls being different than running `ffmpeg` locally).

  • mickeyp 16 hours ago

    Edit: disregard. I read it as he'd done it and had contention problems.

    You can't. You have a single writer - it's one of the many reasons sqlite is terrible for serious work.

    You'll need a multiprocessing Queue and a writer that picks off sentences one by one and commits it.
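
    A rough shape of that, if you stick with multiprocessing (a sketch; call_chatgpt and the table are made up):

        import sqlite3
        from multiprocessing import Manager, Pool, Process

        def call_chatgpt(word):
            ...  # hypothetical: hit the API, return an example sentence

        def worker(args):
            word, q = args
            q.put((word, call_chatgpt(word)))

        def writer(q, n):
            # The only process that ever touches the database.
            con = sqlite3.connect("words.db")
            for _ in range(n):
                word, sentence = q.get()
                con.execute("UPDATE words SET sentence = ? WHERE word = ?",
                            (sentence, word))
                con.commit()

        if __name__ == "__main__":
            words = ["apple", "banana"]               # ...the 11,000 words
            q = Manager().Queue()                     # picklable, unlike a bare Queue
            w = Process(target=writer, args=(q, len(words)))
            w.start()
            with Pool(8) as pool:
                pool.map(worker, [(word, q) for word in words])
            w.join()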

    • hruk 16 hours ago

      This is just untrue - the naive implementation (make the API call, write a single row to the db) will work fine, as transactions are quite fast on modern hardware.

      What do you consider "serious" work? We've served a SaaS product from SQLite (roughly 300-500 queries per second at peak) for several years without much pain. Plus, it's not like PG and MySQL are pain-free, either - they all have their quirks.

      • mickeyp 15 hours ago

        Edit: disregard. I read it as he'd done it and had contention problems.

        I mean it's not if he's got lock contention from BUSY signals, now is it, as he implies. Much of his issues will stem from transactions blocking each other; maybe they are long-lived, maybe they are not. And those 300-500 queries --- are they writes or reads? Because reads are not a problem.

        • hruk 15 hours ago

          Roughly 80/20 read to write. On the instance's gp3 EBS volume (which is pretty slow), we've pushed ~700 write transactions per second without much problem.

          • mickeyp 15 hours ago

            For small oltp workloads the locking is not going to be a problem. But stuff that holds the write lock for some measurable fraction of a second even will gum things up real fast. Transactions that need it for many seconds? You'll quickly be dead in the water.

fitsumbelay 13 hours ago

Very helpful and a model for how technical posts should be written: clarity, concision, anchor links that summarize the top lines. It was a pleasure to read.

ignoramous 16 hours ago

  So, I decided on three locking strategies:

  No-Lock
  Optimistic locking
  Pessimistic locking

  As a default, the no-lock behavior does exactly what the name implies. Nothing. This is the default because my research shows that for 99% all of this is not an issue and every interaction at this level will slow down the whole application.

Aren't the mutexes in the more modern implementations (like Cosmo [0]) & runtimes (like Go [1]) already optimized so applications can use mutexes fearlessly?

[0] https://justine.lol/mutex/

[1] https://victoriametrics.com/blog/go-sync-mutex/

porridgeraisin 16 hours ago

> So an application that wants to use SQLite as its database needs to be the only one accessing it.

No. It uses OS level locks. fcntl(). You can access it from how many ever processes. The only rule is, single writer (at a time).

> When another part of the application wants to read data, it reads from the actual database, then scans the WAL for modifications and applies them on the fly.

Also wrong. WAL does not contain modifications, it contains the full pages. A reader checks the WAL, and if it finds the page it won't even read the DB. It's a bit like a cache in this sense, that's why shared cache mode was discouraged in favour of WAL (in addition to its other benefits). Multiple versions of a page can exist in the WAL (from different transactions), but each reader sees a consistent snapshot which is the newest version of each page up to its snapshot point.

> For some reason on some systems that run Jellyfin when a transaction takes place the SQLite engine reports the database is locked and instead of waiting for the transaction to be resolved the engine refuses to wait and just crashes

You can set a timeout for this - busy_timeout.

> Reproducible

There's nothing unreliable here. It will fail every single time. If it doesn't, then the write finished too fast for the read to notice and return SQLite busy. Not sure what they are seeing.

> The solution

So they've reimplemented SQLites serialisation, as well as SQLites busy_timeout in C#?

> "engine", "crash"

Sqlite is not an engine. It's literally functions you link into your app. It also doesn't crash, it returns sqlite_busy. Maybe EF throws an exception on top of that.

I have to say, this article betrays a lack of fundamental DB knowledge and only knowing ORMs. Understand the DB and then use the ORM on top of it. Or at least, don't flame the DB (context: blame-y tone of the article) if you haven't bothered to understand it. Speaking of ORMs ...

> EF Core

You're telling me that burj khalifa of abstractions doesn't have room to tune SQLite to what web devs expect?

  • yellow_lead 15 hours ago

    C# devs*

    • porridgeraisin 14 hours ago

      Didn't mean to belittle any 'X' developer. By "what web devs expect", I meant the settings that are usually used for databases in web apps.