SQLite blobs have an absolute maximum size of 2GB and a default maximum size of 1GB.
An alternate approach to using blobs is to store the data in files and store the filename in the database. Doing so loses the ACID properties of SQLite.
If you want to insert a blob into a row, you previously needed to supply the entire blob in one go. Reading even a single byte also required retrieving the blob in its entirety. For example, to insert a 100MB file you would have done:

largedata = open("largefile", "rb").read()
cur.execute("insert into foo values(?)", (buffer(largedata),))
SQLite 3.5 introduced incremental blob I/O, so you can read and write blobs in small amounts. You cannot change the size of a blob, so you need to reserve space in advance. You do this with zeroblob, which creates a blob of the specified size filled with zero bytes. For example, you would reserve space for your 100MB blob in one of these two ways:

cur.execute("insert into foo values(zeroblob(100000000))")
cur.execute("insert into foo values(?)", (apsw.zeroblob(100000000),))
This class is used for the second way. Once a blob exists in the database, you then use the Blob class to read and write its contents.
length() → int¶
Size of the zero blob in bytes.
See the example.
__enter__() → context¶
You can use the blob as a context manager so that it is always closed on exit from the block:

with connection.blobopen() as blob:
    blob.write("...")
    res = blob.read(1024)
__exit__() → False¶
Implements the context manager protocol in conjunction with __enter__(). Any exception that happened in the with block is raised after closing the blob.
close([force=False]) → None¶
Closes the blob. Note that even if an error occurs the blob is still closed.
In some cases errors that technically occurred in the read() and write() routines may not be reported until close is called. Similarly errors that occurred in those methods (eg calling write() on a read-only blob) may also be re-reported in close(). (This behaviour is what the underlying SQLite APIs do - it is not APSW doing it.)
It is okay to call close() multiple times.
force – Ignores any errors during close.
read([nbytes]) → bytes¶
Reads amount of data requested, or till end of file, whichever is earlier. Attempting to read beyond the end of the blob returns the empty string/bytes, in the same manner as end of file on normal file objects.
- Return type
  (Python 2) string, (Python 3) bytes
readinto(buffer[, offset=0, length=remaining-buffer]) → None¶
Reads from the blob into a buffer you have supplied. This method is useful if you already have a buffer like object that data is being assembled in, and avoids allocating results in blob.read() and then copying into buffer.
buffer – A writable buffer like object. In Python 2.6 onwards there is a bytearray type that is very useful.
offset – The position to start writing into the buffer, defaulting to the beginning.
length – How much of the blob to read. The default is the remaining space left in the buffer. Note that if there is more space available than blob left then you will get a ValueError exception.
reopen(rowid) → None¶
Change this blob object to point to a different row. It can be faster than closing an existing blob and opening a new one.
seek(offset[, whence=0]) → None¶
Changes current position to offset biased by whence.
offset – New position to seek to. Can be a positive or negative number.
whence – Use 0 if offset is relative to the beginning of the blob, 1 if offset is relative to the current position, and 2 if offset is relative to the end of the blob.
ValueError – If the resulting offset is before the beginning (less than zero) or beyond the end of the blob.
tell() → int¶
Returns the current offset.
write(data) → None¶
Writes the data to the blob.
data – (Python 2) buffer or string. (Python 3) buffer or bytes.
TypeError – Wrong data type
ValueError – If the data would go beyond the end of the blob. You cannot increase the size of a blob by writing beyond the end. You need to use zeroblob to set the desired size first when inserting the blob.