argumentum Posted March 2, 2020 (edited)

I was thinking, but I don't have the experience, so maybe you do. I was thinking of chopping a DB of 100,000 rows (20 columns) into 10,000-row chunks (10 DBs) and querying all of them concurrently to speed up a search, then adding the results together. Is that a sound idea? Or will I run into trouble down the road? Should a DB be kept in one piece? The DB is currently in MySQL. I want to do all this chopping and use SQLite. Thanks

[solved]

Edited March 2, 2020 by argumentum [SOLVED]
Earthshine Posted March 2, 2020

I would look up everything I needed based on indexes, put the results in temporary tables, and work from those (see the sketch below).
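A minimal sketch of that approach in SQLite SQL; the table and column names (customers, last_name, id) are only for illustration and are not from the thread:

    -- Assumes something like:
    --   CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT, ...);
    --   CREATE INDEX idx_customers_last_name ON customers(last_name);

    -- Stage the matching ids in a temporary table (exists only for this connection).
    CREATE TEMP TABLE hits AS
    SELECT id
    FROM customers
    WHERE last_name = 'Smith';

    -- Later queries join against the small staged set instead of rescanning the big table.
    SELECT c.*
    FROM customers AS c
    JOIN hits AS h ON h.id = c.id;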
Nine Posted March 2, 2020

Yes, 100,000 rows is quite a small DB. Put your table in an in-memory SQLite DB. Recently we made it work with a 100k-row table, and the response time for a single row was about 0.30 ms. Of course it will depend on how many rows you want to fetch with each query. You may need to add indexes to accelerate the query, but I would personally not divide the DB the way you intend to.
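Something like this, assuming the data already sits in a file-based SQLite DB that is open as the main database; the customers table and last_name column are illustrative names only:

    -- Attach an in-memory database alongside the file-based one.
    ATTACH DATABASE ':memory:' AS mem;

    -- Copy the table into memory and index the column(s) you search on.
    CREATE TABLE mem.customers AS SELECT * FROM customers;
    CREATE INDEX mem.idx_customers_last_name ON customers(last_name);

    -- Queries against mem.customers now run entirely from RAM.
    SELECT * FROM mem.customers WHERE last_name = 'Smith';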
jchd Posted March 2, 2020

Post the full schema of your DB and the queries you find slow. Posting a sample of rows would help fine-tuning. In general, splitting a DB is almost always a very bad idea.
Earthshine Posted March 2, 2020 Share Posted March 2, 2020 (edited) right, you need to get the tables indexed properly and optimize your queries first and foremost i worked for an energy company with many millions of customers (every customer was a complex entity, and one customer could have many accounts)... they partitioned and segmented their DB for optimal performance across the organization. you should never need to if the db is designed properly for your needs. Edited March 2, 2020 by Earthshine My resources are limited. You must ask the right questions Link to comment Share on other sites More sharing options...
argumentum (Author) Posted March 2, 2020

2 hours ago, Nine said: Recently we made it work with a 100k table, and response time for a single row was about 0.30 ms.
2 hours ago, jchd said: In general, splitting a DB is almost always a very bad idea.
1 hour ago, Earthshine said: you need to get the tables indexed properly and optimize your queries first and foremost

Ok, an overwhelming "don't split the DB". That answers the question regarding a faster search. Reviewing the timings to continue this post, I'll have to go deeper into the PHP. The search takes 1~4 ms to return the query, but the browser says it spent 1~3 sec waiting (TTFB). The transfer of the JSON (content download) took 80 ms. So I conclude that the "ArrayToJSON()" in PHP is the slowdown, and that is obviously unrelated to the DB search. A good question would then be: can I have the DB engine return the query result as JSON? That I have not researched yet. As far as the OP goes, the question is clearly answered. Thanks y'all.
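For what it's worth, SQLite can build the JSON itself through its JSON1 extension (MySQL has similar JSON_OBJECT / JSON_ARRAYAGG functions). A minimal sketch, with illustrative table and column names:

    -- Return the whole result set as a single JSON array of objects,
    -- so PHP only passes the string through instead of converting row by row.
    SELECT json_group_array(
               json_object('id', id, 'name', name, 'email', email)
           ) AS result_json
    FROM customers
    WHERE last_name = 'Smith';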