I've noticed a similar thing happening. The first third loads quickly; the
remaining two-thirds stagnates. It appears there is some kind of bottleneck.
I thought it was the SAN.
My application begins a transaction, does all its inserts, and then
commits. There could be millions of rows in the transaction. Would it
perform better to commit in batches of, say, 250m or 500m?
Now is the time for me to make these changes, as the application is being
prepped for production.
On Wed, Feb 8, 2012 at 4:29 PM, Simon Slavin <slavins@...> wrote:
> On 8 Feb 2012, at 10:22pm, Oliver Peters wrote:
> > It's the Primary Key that's causing it, because for every INSERT it is
> checked whether unix_time is already present in a record.
> > So the question is whether you really need unix_time as a PK.
> If you're batching your INSERTs up into transactions, try doing a VACUUM
> after each COMMIT.
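A minimal sketch of that tip, assuming a plain `sqlite3` connection (the `conn` name and helper are hypothetical). The one real constraint is that VACUUM cannot run inside a transaction, so the COMMIT has to come first:

```python
import sqlite3

def commit_and_vacuum(conn):
    """Commit the current batch of INSERTs, then compact the database.
    VACUUM must run outside a transaction, hence the commit first."""
    conn.commit()
    conn.execute("VACUUM")
```

Note that VACUUM rewrites the entire database file, so running it after every commit of a large import can itself be expensive; it is worth measuring before adopting.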
> sqlite-users mailing list
> sqlite-users@...
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users