> I'm really pleased with kst2, and the way it can plot several
> hundreds of datasets with ease.
Nice to see you appreciate Kst :-) Performance with large datasets is one of Kst's strengths.
> Now I'm facing a CSV file which has
> about 27,000 columns and 22 rows. It starts computing, but in the
> Task Manager I can see that it kind of 'stops' at 650 MB of RAM
> consumption and at 100% CPU usage.
> Is there a way of increasing the memory dedicated to the plotting
> process? Or is there a general workaround for such huge dimensions
> of data?
There is no way to increase the memory dedicated to plotting: memory is allocated dynamically, so in theory Kst can use more or less all the memory available on your system.
That said, I don't really understand your issue. I have already used Kst successfully to plot much larger amounts of data from a .csv file (25 vectors with 3 million points each, plus histograms, FFTs, etc., so in total probably more than 100 million values) on a fairly standard PC. 27,000 variables * 22 samples amount to 594,000 values, which is actually very little in terms of RAM on a modern system (count 8 bytes per value, so around 5 MB). My cases, however, involved rather few vectors with many values each. Maybe we have a bug when there are many vectors? Or there could be a problem somewhere in your file that Kst chokes on.
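The back-of-the-envelope arithmetic above can be sketched as follows (assuming 8 bytes per double-precision sample for the raw data only, ignoring any per-vector overhead; the helper name is mine, not part of Kst):

```python
# Rough raw-data memory estimate for the dataset sizes discussed above.
# Assumes each sample is stored as an 8-byte double; per-vector overhead
# in Kst is not counted here.
BYTES_PER_VALUE = 8

def estimate_mb(vectors, samples_per_vector):
    """Raw data size in MB for vectors * samples double-precision values."""
    return vectors * samples_per_vector * BYTES_PER_VALUE / 1024**2

print(f"{estimate_mb(27000, 22):.1f} MB")      # the 27,000-column CSV: ~4.5 MB
print(f"{estimate_mb(25, 3_000_000):.1f} MB")  # 25 vectors of 3M points: ~572 MB
```

Either way, the raw values alone are nowhere near the 650 MB you observed, which is why a many-vectors bug or a malformed file seems more likely than genuine memory pressure.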
The best way for us to debug this would be to have access to your file. If you zip it with 7-Zip, for instance, the size should be reduced dramatically. If you could then upload it somewhere we can access, we'd surely have a look. If you don't have a server to upload to, google for "free file upload service".
Thanks for your interest; hoping we can fix your issue quickly.