Description
Bug report
Bug description:
Hey there.
(Please read to the end: it does work with the sqlite3 command-line utility, but not with Python's sqlite3 module.)
With CPython 3.13.2 on Debian unstable, it seems that the timeout= parameter of sqlite3.connect() doesn't work as it should.
I have a small demo program:
#!/usr/bin/python3
import sqlite3
import sys
import time

# Usage: lock.py <name> <timeout> <sleep-before-transaction> <sleep-before-commit>
print(f"con tmout: {sys.argv[2]}")
con = sqlite3.connect("locks.db", autocommit=False, timeout=float(sys.argv[2]))
cur = con.cursor()

print(f"1st sleep: {sys.argv[3]}")
time.sleep(int(sys.argv[3]))

cur.execute("CREATE TABLE IF NOT EXISTS locks (name TEXT PRIMARY KEY ON CONFLICT ROLLBACK) STRICT")
x = cur.execute("INSERT INTO locks (name) VALUES (?);", (sys.argv[1],))
print(x.fetchall())

print(f"2nd sleep: {sys.argv[4]}")
time.sleep(int(sys.argv[4]))

con.commit()
cur.close()
con.close()
- It uses autocommit=False, which means DEFERRED transactions; AFAIU the transaction therefore only starts with the first access, i.e. the execute() that would initially create the table, and thus only after the first sleep.
- It takes 4 arguments: the first is the name value (which is a primary key and must be unique), the second is the timeout= of the connection, the third is the sleep before the transaction starts, and the fourth is the sleep right before the commit().
My assumption would be that while the DB is locked by a write transaction, any concurrent write transaction waits for up to timeout= seconds before it aborts.
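If I understand correctly, timeout= is handed to SQLite's busy handler, so the value the connection actually got should be visible via the busy_timeout PRAGMA (which reports milliseconds). Just a small check I'd do, with 500 as an example value:

import sqlite3

# timeout= is given in seconds; PRAGMA busy_timeout reports milliseconds.
con = sqlite3.connect("locks.db", timeout=500)
print(con.execute("PRAGMA busy_timeout").fetchone())  # I'd expect (500000,) here
con.close()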
Now when I start the script twice (at the same time), first e.g. with:
$ ./lock.py 1st_a 5 0 10
second with:
$ ./lock.py 2nd_a 500 0 0
I'd expect the second to wait for up to 500 s, and since the first one sleeps only 10 s, it should eventually succeed.
However, it immediately aborts with:
sqlite3.OperationalError: database is locked
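In case it's useful, the same thing can also be reproduced in a single process, without the two terminals. This is just a sketch with arbitrary sleep durations and row names, but it follows the same pattern: one connection holds a write transaction for a few seconds, another one then tries to insert with a generous timeout=.

#!/usr/bin/python3
# Single-process variant of the two-terminal test above.
import sqlite3
import threading
import time

DDL = "CREATE TABLE IF NOT EXISTS locks (name TEXT PRIMARY KEY ON CONFLICT ROLLBACK) STRICT"

# Make sure the table exists and is committed before the two writers start.
setup = sqlite3.connect("locks.db")
setup.execute(DDL)
setup.commit()
setup.close()

def holder():
    # Opens a write transaction and keeps it open for ~5 s before committing.
    con = sqlite3.connect("locks.db", autocommit=False)
    con.execute("INSERT INTO locks (name) VALUES (?)", (f"holder-{time.time_ns()}",))
    time.sleep(5)
    con.commit()
    con.close()

t = threading.Thread(target=holder)
t.start()
time.sleep(1)  # give the holder time to start its write transaction

con = sqlite3.connect("locks.db", autocommit=False, timeout=500)
try:
    con.execute("INSERT INTO locks (name) VALUES (?)", (f"waiter-{time.time_ns()}",))
    con.commit()
    print("waiter: insert succeeded")
except sqlite3.OperationalError as exc:
    print("waiter: insert failed:", exc)
finally:
    con.close()
    t.join()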
I've seen #124510, but the explanation there was about the case of upgrading read transactions to write transactions, so it doesn't seem to apply here, and in fact it seems to just work as I'd expect with the sqlite3 command-line utility:
First invocation:
$ sqlite3 locks.db
sqlite> BEGIN DEFERRED;
sqlite> INSERT INTO locks (name) VALUES ('1st_b');
Second invocation (note that sqlite3's .timeout uses milliseconds):
$ sqlite3 locks.db
sqlite> .timeout 10000
sqlite> INSERT INTO locks (name) VALUES ('2nd_b');
Doing this, the second one will block until either the 10 s have passed or I do a COMMIT; in the first.
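For completeness, the closest Python counterpart to that second shell session that I can think of would be something like the sketch below: autocommit=True so that the INSERT runs in SQLite's own autocommit mode like in the shell, and the PRAGMA standing in for .timeout 10000 (the '2nd_c' name is again arbitrary):

import sqlite3

# Rough Python equivalent of the second sqlite3 CLI session above.
con = sqlite3.connect("locks.db", autocommit=True)
con.execute("PRAGMA busy_timeout = 10000")  # corresponds to ".timeout 10000" (ms)
con.execute("INSERT INTO locks (name) VALUES ('2nd_c')")
con.close()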
Any ideas why that doesn't work in Python?
Thanks,
Chris.
CPython versions tested on:
3.13
Operating systems tested on:
Linux