Has anyone had any experience with database replication in a single-site environment to speed up end-user performance, using a primary server for write transactions and secondary servers for read-only operations?
Would love to hear your thoughts.
If you're a single site, how many users do you have hitting your servers? I've only encountered database replication for multi site needs.
Sounds like he wants to distribute the load within a single site: replicate the database and direct all users to the replica for read operations. If the database is truly overloaded I could see it working, although I'm not sure what other issues may arise, such as out-of-date data that hasn't replicated yet.
John, how many users do you have, and do you know for sure it's overloaded?
From my understanding each user is connected to both the read/write database and the read-only database, but I'm not exactly sure how it works. The reason I want to try it is that the database consistently runs into lock wait time issues, so transactions become suspended. On the users' side, things like searches hang, checking in or transitioning files hangs, and simply right-clicking in Explorer takes a long time. It's just not a good experience and is very unproductive.
How many users do you have hitting the system at one time? How many searching versus how many editing (check-in, transition, etc.)?
We have over 500 users, with many working at the same time. There are a few bugs here and there that will deadlock the DB or really slow it down, but not frequently.
Nowhere near that, maybe 15% of that hitting the system constantly, which can be seen in SSMS Performance Monitor, but transactions/min can be anywhere over 50,000 - not sure if that is normal behaviour.
Would you know why something as simple as right clicking in the Vault might take 3-4 seconds?
On average CPU utilisation is stable around 2-8%, and we have 64 GB of RAM on the server, to be increased to 128 GB, which I hope is sufficient.
Have you done: 1) index maintenance and 2) creating additional tempdb files?
Thanks for the suggestion. We have a SQL maintenance plan in place to rebuild the indexes and shrink the transaction log every night, but I didn't think of creating additional tempdb files. Is there any guide on how to create more tempdb files? Also, do you think this will increase performance?
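For what it's worth, adding tempdb files is just an ALTER DATABASE per file. This is a sketch only - the drive letter, file names, and sizes below are placeholders you'd adjust for your environment. A common starting point is one tempdb data file per CPU core (up to 8), all sized the same so SQL Server's proportional fill spreads activity evenly:

```sql
-- Placeholder paths/sizes; keep all tempdb data files the same size.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);
```

A restart of the SQL Server service is the usual way to get tempdb rebuilt cleanly after resizing the existing files.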
Charley Saint posted a link in this thread.... PDM SQL DB Size Leading to Slow Performance
Before doing that, make sure you have a backup...
Have you opened a service request with your VAR? ...if this has been going on a while, it might be a good idea to run Profiler and capture a trace.
Speak my name and I appear!
One of these days I'll take the time to make a really in depth guide to SQL performance tuning, but today's not that day...
Other things to check not in that thread:
File fragmentation of the mdf files - it's still just a file on a disk. And don't let auto-growth set the size if you can help it; it's a good plan B but degrades performance
Setting the allocation unit size on the disk to 64 KB (requires a disk reformat)
Check your recovery model: if it's not Simple and you're not taking log backups, shrinking the transaction log won't do anything (shrink removes empty space, but transactions don't get removed from the logs until backups remove them)
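Checking that last one takes a few seconds. A sketch, with the database name as a placeholder:

```sql
-- See the current recovery model for every database.
SELECT name, recovery_model_desc
FROM sys.databases;

-- If FULL recovery is intentional, the log only truncates after log backups:
-- BACKUP LOG YourVaultDb TO DISK = 'E:\Backups\YourVaultDb.trn';

-- If you don't need point-in-time restore, SIMPLE stops the log from growing:
-- ALTER DATABASE YourVaultDb SET RECOVERY SIMPLE;
```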
Thanks Charley, I'll take a good look into this
Also, do you think switching to enterprise SSDs would help at all on the database server? One for the vault database and a separate one for the log file?
In regards to your SQL Performance Tuning guide, for those who don't know there's a really good presentation on the topic from SOLIDWORKS World 2015!
Ah SWW 2015, that was a good year. Pretty good info in that presentation but probably needs some hands on how to for the non-sysadmin types.
I would highly recommend SSDs whenever they're an option, though putting the ldf on its own disk may be overkill for EPDM, as it's only used for data writes and probably wouldn't saturate a single SSD if the mdf/ldf share a disk. Tempdb is a different story: you'll see a lot of improvement by moving it off to its own disk and splitting it into multiple files. RAID on SSDs is recommended too, as data corruption happens.
Also take some time to familiarize yourself with SQL Server Profiler and how to track the duration of queries triggered by actions in EPDM (like how long it takes to generate a right-click menu, which involves writing a bunch of stuff to tempdb and running Exec Mnu_GetMenuInfo2). If that runs in 600 ms and it takes 3 seconds to generate the right-click menu, the best you can hope for is to get that down to 2.4 s. No amount of SQL tuning can help you if the problem's not with the SQL server.
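If you'd rather script it than click through Profiler, an Extended Events session can capture the same durations. A sketch - the session name, file path, and 500 ms threshold are placeholders (duration is in microseconds):

```sql
-- Capture completed RPC calls (e.g. EPDM stored procedure calls)
-- that take longer than 500 ms, written to an event file.
CREATE EVENT SESSION SlowEpdmCalls ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.client_hostname, sqlserver.sql_text)
    WHERE duration > 500000
)
ADD TARGET package0.event_file (SET filename = 'C:\Temp\SlowEpdmCalls.xel');

ALTER EVENT SESSION SlowEpdmCalls ON SERVER STATE = START;
```

Comparing the captured duration against the wall-clock time the client takes tells you whether the slowness is in SQL Server or on the client side.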
Also, RMB menus trigger a lot of local actions based on what's installed. Take a look at Autoruns and see all the ContextMenuHandlers under the Explorer tab that also execute on an RMB click; they can drag things down and even crash EPDM in some versions (I'm looking at you, Notepad++).
I have a client who replicated their database from the USA to the Netherlands as a pilot project, to compare results against when they were only replicating their archive server to the Netherlands. The Netherlands facility experienced such an increase in read performance that they claimed it felt like the database was onsite.
They have 120 PDM CAD Editors and numerous viewers being shared by 1200 users. It was a significant performance increase.
By the way, you do know that you have to be running SQL Server Enterprise edition, not the Standard edition that comes with PDM Pro? It's not a cheap piece of software to step into, so make sure you're factoring that into your equations.