I don't think that this is due to database size. We have the same issue and have an SR open for it. I personally think that SolidWorks doesn't know what is causing this. We do not have overly complex workflows and our database size is 14.4 GB. We are experiencing all the same issues that you are. We also have good maintenance in place, and I have our database and filestore separated.
Please let me know if you get a solution on this. I am willing to try almost anything.
It's unfortunate, but also good to know there are other companies putting in SRs for this; maybe something will get done. Hopefully this post will gain some traction in the forum.
We have a very strong database team at my company, and they were saying the way EPDM interacts with the database is nowhere near optimal (I think he said most of the actions are done in tempdb). It's almost like SolidWorks didn't take into account that after years of use the database will grow and performance will continue to degrade. He was saying EPDM isn't as robust and doesn't function like other database-backed software that have much larger databases and function flawlessly.
As a company we aren't doing anything outside of the box with the workflows; we just have a lot of them and a lot of transitions, which I guess creates more database traffic and increases the database size as files go through the workflows.
Just curious what SP are you running? We are using 2016 SP5 for SW and PDM.
We are running 2017 SP3 right now. I have been at this since June with my VAR and SolidWorks. Just trying to get an update on progress is slow. SolidWorks did evaluate my database from 2016 as well as my current one. They didn't find anything in either. They had me add indexing to parts of the database, but that didn't help. Now it's just waiting on SolidWorks to do something. I am not going to be surprised if nothing changes until 2019. The amount of time our users have lost with these slowdowns has to now be astronomical. We are holding off on any new purchases until this gets fixed. Our electrical dept is not happy since SolidWorks Electrical is being held up.
We are having performance issues as well. We have been running the ePDM database for 7 years or so now. We recently (within a year) implemented ECO, DCP and RFQ requests using the forms and workflows, as well as notifications. In some cases the submitter can select who the notification goes to. The majority are distribution groups, and some are mail-enabled public folders to keep the quantity down on the ePDM notification side. I got SolidWorks involved and they analyzed the database; they came back with NPF and suggested I put the archive on another server. I don't want to dump money on this if it's not going to solve the problem. Our CPU pegs out at 100% at random times, for as long as a minute. When looking at the performance monitor it's all SQL related, and when doing a SQL trace it shows notifications have a long wait time. I even noticed that editing workflows will lock up the server. All deleted users have been removed from the workflows; I thought for sure that was it, but I haven't seen much of a change. I ran the query to see if any folders were over the 1000-file count and only 2 appeared.
ePDM 2017, Server 2008 R2, SQL 2008, 2x Xeon E5649, 56 GB RAM.
A SQL Server update wouldn't hurt; your VAR can get you a copy of the latest supported version, which is SQL Server 2014. The best way to figure out what's pegging your CPU is to run SQL Server Profiler and capture a bunch of traffic. To keep it reasonable, you can try filtering on Duration >= 1000 so it only shows queries that run longer than a second (click the Duration column to bring up the filter):
Take long running queries from this to your VAR or SolidWorks to help diagnose what's causing the slowdown.
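If Profiler isn't convenient, a rough alternative is to pull the worst offenders straight out of the plan cache with the DMVs. This is just a sketch (it assumes you have VIEW SERVER STATE permission, and the stats reset whenever SQL Server restarts or plans are evicted):

```sql
-- Top 10 cached queries by total elapsed time (DMV times are in microseconds).
SELECT TOP 10
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

The query text and average times from this are the same kind of evidence a Profiler trace would give you, and just as easy to hand to your VAR.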
Sooo, I was thinking last night.. I have a handful of distribution lists that I pulled into ePDM by disabling the active directory integration, adding the accounts manually then re-enabling. I thought this would keep the majority of the notifications generated through ePDM to a minimum, but now I am wondering if it could be creating a performance problem. Anyone have theirs setup like this?
Are any of the respondents using the EPDM "Item" functionality? If so, you should seriously look into this as being your culprit.
My previous company uses Items extensively and has a tool that sales uses to grab document lists from the Items. We had a customer program that called the API search routines, and those routines brought the search function to a crawl. I'm talking about 2-3 minutes to do queries against the Items. Our VAR's automation programmer was able to write a custom search routine to "work around" the issue. As far as I know the bug has not been addressed; that was 5 years ago.
We aren't using the Item functionality for EPDM at this point. Item numbers are designated by a separate ERP system and file names, numbers are input manually into the Vault.
Searching is not as fast as we would like either, mainly when limited search criteria are entered.
Out of curiosity, what kind of notifications do you have set up? In previous experiences, I've found that notifications (and the multiple groups/folders/users that are assigned to them) has appeared to me to be a big factor in performance.
Also, do you have the archive server service and the SQL server on separate machines? or are they on the same machine? Consolidation may not be an option, but improving any kind of communication speeds between the archive, and SQL may help too.
Have you run the vault status tool to get any kind of reports on the current health of your DB? Solidworks has this in the KB, your VAR should be able to get it and use it with you, it could provide some more insight.
We have a bunch of notifications set up with multiple groups and users, both through the 'send mail' task and we have a separate dynamic notification add in. If that is the case, I don't understand why they would offer these features if they would degrade performance that much.
Yes we have separate servers for Archive and DB servers. They are on the same rack but yes, we will not be able to consolidate them.
Our VAR did stop by on Friday and take a bunch of info, in collaboration with the IT department and database specialists. I'm still waiting to hear back on what/if they found anything unusual.
I've only seen significant performance decreases when a single transition would have (no joke) 20+ different types of notifications, and this would be on almost each transition.
I don't know too much about the back end, but I've speculated that the SQL tables end up getting huge because of notifications. They are connected to the workflow, groups, users, AND folders all at once, and queries have to run against each of those every time. If you set a notification at the root level of a vault, that means it's querying against every folder you have in the vault to see whether it needs to fire off a transition... or, in most cases, NOT fire one off.
Much like anything else on computers, the larger a single file, object, or in this case SQL table, gets, the longer everything takes to manipulate it. For example, think of any file that is 100MB, like a PDF, even that would take a long time to open simply because it's just a HUGE file. I speculate that SQL tables, eventually, will degrade simply because they're eternally getting bigger and bigger.
Why would they put a feature that degrades the performance? I think because on a smaller scale, it's a useful asset to the software. What happens when the file sizes start to grow? Everything slows down. How is this solved? That I don't have an answer to because I'm not with SWx or a SQL coder, but I think at this point they're really starting to see DB's of this size because the program has existed long enough to get to this stage.
Ok I digress, but I think you should ask your VAR about Solution Id: S-066526, it's the PDM Status Report Tool (they can search the SWx Knowledgebase for that) and it can give you some insight into the SQL health of your vault.
Call your VAR; they are a wealth of knowledge on EPDM and they know the SQL settings and performance differences in SQL.
They helped us recently, and I'm at a billion-dollar company. Your local SQL experts are not going to know all the settings like your VAR. If your VAR doesn't help, I can tell you first hand that Go-Engineer knows a lot about the SQL and backup settings that help the overall performance of EPDM. Call them; that's my best advice!
Thanks Dwayne, our VAR came in and collected a lot of data from us. I am still waiting to hear back on what the best course of action going forward will be. I will keep Go-Engineer in mind, although I know our VAR thoroughly went through our settings and everything looked as it should.
Just curious how many users do you have in your system, to what degree do you use PDM and what kind of hardware are you running on the database side?
Brad, can you give an example of slow? How long does a transition take?
I just restructured some files in our system because of performance issues. I've found that the number of files in a folder affects transition speed greatly. We had 15,000 files in one folder and it would take 30 seconds to display the contents, and around the same amount of time to do transitions. Check-in times might've been affected as well, although I don't remember exactly. I believe a refresh of the folder during these actions was the issue.
We restructured our "many" files into more sub folders and try to keep the number of files in a folder below 1000. Transitions went from 30 seconds to 5 seconds and of course accessing the sub folder, particularly from the SolidWorks open dialogue is much faster.
We have 200 licenses, 600 total users.
DB MDF is 7.5 GB, LDF 9 GB
550,000 total documents (Documents table)
50,000 total folders (Projects table)
1,670,000 total versions (Revisions table)
16,500,000 total variables (VariableValue table)
PDM Pro 2017 sp3
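If anyone wants to pull the same metrics from their own vault, the counts above come from tables the thread has already named. A sketch (run read-only in SSMS against the vault database; table names match a standard PDM Pro vault, but verify against yours):

```sql
-- Quick size metrics for a PDM vault database.
SELECT 'Documents'     AS TableName, COUNT(*) AS RowCnt FROM dbo.Documents
UNION ALL
SELECT 'Projects',      COUNT(*) FROM dbo.Projects       -- folders
UNION ALL
SELECT 'Revisions',     COUNT(*) FROM dbo.Revisions      -- versions
UNION ALL
SELECT 'VariableValue', COUNT(*) FROM dbo.VariableValue; -- variable values
```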
Interesting data, Jason, and an impressive user count.
Our PDM Pro has been around 7 months in production use for now. We have 25 licenses for now with around 30 unique users, less than 20 concurrent users per day for now since people are still finishing old projects outside of PDM.
Entering a folder with 800 files takes around 3 seconds at the moment. Changing between files when viewing data cards takes a little under a second; we have around 80 variables on the data cards, and files typically have around 50% of the variable fields filled.
So no problems yet. Looking forward hearing what awaits in the future for us. Some users are already complaining about "slow" PDM, since they are so used to working directly from the network drive...
Our DB MDF is around 600mb so far. With 13,000 unique files, haven't checked version count from SQL yet.
I am sure many of you know this, but I haven't seen it mentioned yet: the number of variable columns in folder views has a very large impact, at least on the opening speed of folders. We have 8 additional columns for all users now; we used to have quite a few more. The number of columns vs. time spent opening a folder seemed to scale almost linearly.
We are thinking of moving our PDM server to a dedicated disk array. At the moment it is using a shared array with 5 other virtual servers: ERP and other more lightweight servers. We are evaluating whether we should go for an SSD array or traditional platters. Does anyone have any insight into whether PDM benefits from SSDs if Server/Archive/DB are all on the same array? It would seem like SSDs could offer a nice benefit, since all services and files are being accessed from the same array by multiple users simultaneously. At the moment our server has an 8-core Xeon at 2.0 GHz with 16 GB RAM. We're seeing PDM max out one CPU core for state changes and when opening big folders, which would suggest CPU is the bottleneck for us.
Once you go over about 2000 files, you start to really see a performance hit....and more columns will be slower.....just more data to pull. Web based systems get around this by only pulling 50 records or so....then loading more as you scroll down. Since PDM is integrated into Windows Explorer, there is no option to only load a few, it has to load them all.
We have 3 PDM servers, one for the database, and two replicated archive servers. 8 core Xeon E5-2470 @ 2.30ghz and 48gb of memory. The SQL database server has a dedicated RAID 1 drive array for the OS with a second partition for the DB logs files. Then a dedicated RAID 5 array for the DB files.
The servers are due for an upgrade, so maybe SSDs will be an option, as I believe you will get better read/write performance on the DB server. The archive servers will probably remain 15k SCSI drives due to the cost of large SSDs.....we have 3 TB of storage and are using a little more than half of that.
Does anyone have a query to check which folders in our vault have over 2000 files in them?
Or is there an easier way....
select proj.Path, COUNT(1) as theCount
from dbo.DocumentsInProjects docsInProjects
join Projects proj on docsInProjects.ProjectID = proj.ProjectID
group by proj.Path
order by theCount DESC
This will give you a list of the folders and the count of files in them, in descending order. I have noticed some minor discrepancies that I doubt are worth tracking down, but if you are just looking to see which directories have more than 2000 files, it should be good enough.
Incidentally, our part numbering system is six digits, with no intelligence. Files are stored according to part number in groups of 1000, in folders with names that match the first 3 digits of the part number and then 'XXX'. So, directory 400XXX will have parts 400000 through 400999. This includes parts, assemblies and drawings. With this approach (and the 34000 part numbers we have created), our largest folder has 1860 files in it.
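If you only care about folders past the 2,000-file mark, the query above can be narrowed with a HAVING clause (same assumed dbo.DocumentsInProjects and dbo.Projects tables):

```sql
-- Only folders containing more than 2000 files, largest first.
SELECT proj.Path, COUNT(1) AS theCount
FROM dbo.DocumentsInProjects AS docsInProjects
JOIN dbo.Projects AS proj
  ON docsInProjects.ProjectID = proj.ProjectID
GROUP BY proj.Path
HAVING COUNT(1) > 2000
ORDER BY theCount DESC;
```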
I'm wondering if this may be the issue, we have a few folders (with our released PDF documents) that have 10000+ files in them. These folders are really only accessed via search and if a file gets released a PDF will dump into that folder. Would this cause slow transition even for files that have nothing to do with these folders?
Also, we have been in this system for about 6.5 years at this point, still using the same vault.
Your best measure, if performance is not good, is to ask the users.
If something is obviously not as fast as it was in the past, that could be one reason why. I'm at a large company with millions of files; they have done testing with the VAR, and once there are more than 1,000 files per folder, performance is negatively affected in EPDM.
Our PDF folders are the same.....the largest has 8000 files in it. The users however don't browse into that structure, they search it and there doesn't seem to be an issue searching it or opening files from there. Files are added there automatically and also transitioned automatically so I don't think most users are feeling any pain here except for the two people who do a "release" in our system......which might explain some timeouts releasing large ECOs.
I just tested searching......it takes about 7 seconds to search by drawing number to get results, regardless of the folder the file is in. There are around 100 folders containing 120,000 PDF files. Caching and opening a PDF under 100 KB is maybe 2 seconds, so I don't think the number of files in a folder is affecting search. Transitions are one area I remember being affected in the past. I transitioned a file in the folder with 8000+ PDFs and it took around 10 seconds. I searched for another file, in a folder with 11 files, and it transitions in 1 second, so folder size is definitely still affecting transition speed.
Wanted to make a few points I haven't seen here yet,
- IOPS are king when it comes to database performance; SSDs are preferred these days
- Split your tempdb into multiple files; this prevents contention
- Put tempdb on its own drive; it's pretty chatty
- Performance reports are a great indicator of general problems
- SQL Server Profiler is great for specific problems, as you can grab individual queries and timings that you can send to your VAR to investigate
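Splitting tempdb can be scripted. A minimal sketch (the file paths, names, and sizes below are placeholders, not recommendations; one data file per CPU core up to 8 is the commonly cited starting point, and all files should be the same size):

```sql
-- Sketch: add three more tempdb data files so a 4-core box has 4 in total.
-- Adjust NAME/FILENAME/SIZE for your server; ideally the drive is dedicated.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
```

Moving the existing tempdev/templog files to another drive is done with ALTER DATABASE ... MODIFY FILE and takes effect after a service restart.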
Great points Charley,
Another thing is to ensure that you are doing index maintenance.
If you want to learn more about the topic, I recommend the YouTube video series SQLPassion by Klaus Aschenbrenner.
He also has a good video on TempDB... (resolving latch contention)
Re: SSD's - I recommend reading this post on best practices... (below is an excerpt)
Although SSDs provide better performance than rotational HDDs, SSDs have a significantly shorter life cycle. I/O-intensive applications such as SQL Server will shorten the life of an SSD drive. In addition, the greater the percentage of the drive that's used, the shorter the SSD's lifespan will be. One recommendation is to make sure that your SSDs have at least 20 percent of the drive unallocated. Read performance is consistent over the entire life cycle of the drive. However, write performance degrades as the drives are used, which means the length of time it takes to perform write actions will increase. It's also important to know that you don't need to defragment SSDs because they don't access data the same way as HDDs. In fact, defragmenting SSDs will only serve to shorten their life cycle.
If you want to use SSDs, don't use single SSD drive implementations and plan on replacing the SSDs during the life cycle of your server. Some of the possible SQL Server implementations for SSDs include:
- Moving indexes to SSD. Typically, indexes aren't very large and they have a lot of random read activities, making them ideal candidates to move onto SSD drives.
- Moving data files to SSD. Data files usually have more read operations than write operations, so in most cases, they're well suited for SSDs.
- Moving log files to SSD. Log files are write-intensive. Therefore, if you use SSDs for log files, use enterprise-grade SSDs and a RAID 1 or RAID 10 mirrored implementation.
- Moving tempdb to SSD. Tempdb tends to have very high levels of random writes, which can lead to SSD burnout. Therefore, if you use SSDs for tempdb, use enterprise-grade SSDs, use a RAID 1 or RAID 10 mirrored implementation, and plan for the SSDs' replacement cycle. In addition, you might look into PCIe DRAM implementation for tempdb. DRAM storage provides higher levels of write performance and doesn't experience SSD burnout. However, DRAM storage can be expensive.
Okay - now, having said all that, these items are best practices and may not resolve a performance issue if it is application or query related.
For this, you must get your VAR involved and they will be asked to collect data (using Profiler and other tools).
Take a look at my post and see if some these things could apply to SolidWorks PDM.
Also, I don't know about the PDM side of things, but on my side we are currently working on something to make the SQL DB automatically index itself and defragment itself. Perhaps that is something that could be implemented into PDM if it isn't already?
Index maintenance can easily be done by creating a maintenance plan in SQL Server Management Studio.
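For a quick manual pass outside of a maintenance plan, a sketch (thresholds here are the commonly cited guidelines: reorganize between roughly 5% and 30% fragmentation, rebuild above 30%; the index/table names in the comments are placeholders):

```sql
-- Find fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then per index, for example:
-- ALTER INDEX IX_Example ON dbo.Documents REORGANIZE;  -- ~5-30% fragmented
-- ALTER INDEX IX_Example ON dbo.Documents REBUILD;     -- >30% fragmented
```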
I believe there is a solution for this already.
Seems like a common theme. We just had our VAR come onsite and perform an analysis, along with debugging our setup and other items beforehand. We have 112,000+ files/90GB of info, ~80 users, 4 physical sites with an Archive Server at each site, replication enabled, backups, etc. Some key takeaways that helped our performance recently:
- Make sure you only have one DataBase Service running. Over the years, somehow we had a 2nd instance running on an archive server. The DB service should only be running on the DB server.
- Files per folder are absolutely a factor in the performance of your clients and the EPDM service. Noticeable slowdowns occur once you get over 1,000 files per folder when browsing folders, performing state changes, etc. Even SolidWorks open/close/save times are impacted when any components are in large-listing folders.
- Make sure your replication settings are SIMPLE, and not replicating more than you need. On-demand replication works sufficiently, and having periodic or nightly replication can be nice, but is an overhead tax. Choose wisely.
- Make sure your servers, physical or virtual, have the proper SQL and other related settings configured to best support EPDM's specific needs. There are some unique settings that can make a difference here.
- Cold Storage - Disable this. It just causes headaches with multi-site replication, where versions are no longer available to replicate, and creates additional overhead tax on processing. (endless loops) We did have Cold Storage setup to keep the latest 20 versions and all marked with a revision, and delete all other versions. This was creating lots of strange behavior. Hard drives are cheap, IMO.
- Workflows - Keep them simple stupid. Yeah, there is potential to do a crap load of things in workflows, but honestly, the more complicated you make them the more headache it is to manage now and in the future. Only put into a workflow what is really needed. Use the 80/20 rule, not everything needs to be automated and part of a workflow.
- SQL software/process times must be on the low end, <25ms. If it is a shared SQL resource, be mindful that EPDM is picky and any lag in process time of SQL actions can result in group file processing on the EPDM side going bonkers, with workflow transitions not completing entirely on all files or files even becoming stuck in in-between states with automatic transitions.
- WiFi - Do. Not. Use. Plug it in, save yourself a headache.
- Run ping tests between clients and servers. Trace the routes. If the ping is too high or there are too many hops, fix it or be stuck with irregular and bad EPDM performance.
- If multiple sites, make sure your users are logging into the right/local archive server. You'd think this would seem obvious, but you'd be surprised how many times I've made someone's day by logging them into the right server and they are blown away by the speed of EDPM all of a sudden.
This is on the money, thank you for these tips... I'm honestly going to save quite a bit of this too.
In addition to simplifying workflows, a very good way to speed them up is to reduce the number of notification settings.
That's interesting about the replication settings, I wasn't aware those could affect performance as well. Good to know!
I just sent this note to our SolidWorks "Support" folks. They are clueless about how to manage SQL Server.
We've had issues for years and I think we finally got to the bottom of it.
We followed your recommendations at the bottom (reindexing crap not shown here and suggested by others above) you sent us a while back and it did not solve the performance problem.
We did work around the problem by rebooting the server every night, but that was a band-aid over the real problem.
Well, today the real problem reared its head: the OS drive ran out of disk space (the database itself is on a separate drive).
After digging into it I realized that SQL had created a 90GB recovery Journal file for the database.
This journal is kept since the last backup of the database. The problem is that SQL Server isn't aware that we are backing up the system, so it just lets the journal grow.
So this is a couple of years' worth of journaling. That's what was slowing it down.
We are using standard Microsoft tools to back up, but our server is running in a virtual machine and the entire virtual machine is backed up, so SQL Server doesn't know it's being backed up.
So customers need to make sure their backup is informing SQL Server that it's being backed up, or shut this "feature" (recoverable since last backup) off.
The Journals are kept here and should not be excessively large
c:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\*.LDF
These are the steps to DISABLE a Recovery (since backup) Journal.
In SQL Manager:
1. Right click database
2. Choose Properties
3. Click on the 'Options' tab.
4. Set recovery model to SIMPLE
5. Next, choose the FILES tab
6. Now make sure you select the LOG file and scroll right. Under the "Autogrowth" heading click the dots ....
7. Then disable Autogrowth (This is optional and will limit additional growth)
8. Then click OK and set the "Initial Size" to the size you wish to have (I set mine to 20MB)
9. Click OK to save changes
10. Then right-click the DB again, and choose "Tasks > Shrink > Database", press OK.
11. Now compare your file sizes!:)
For folks that are running "good" with large databases it's because you are using a backup procedure that informs SQLServer that the backup is done on a regular basis or you disabled the recovery journal as shown above. Unless you are backing up the Journal every minute it's nearly useless.
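Before changing anything, it's worth checking which recovery model you're on and how big the log actually is. A read-only sketch:

```sql
-- Recovery model per database (FULL means log backups are expected).
SELECT name, recovery_model_desc FROM sys.databases;

-- Log file size and percent used, per database.
DBCC SQLPERF(LOGSPACE);
```

Note that switching to SIMPLE trades point-in-time restore for simplicity; if you need that capability, the alternative is to stay on FULL and schedule regular transaction log backups, which also keeps the LDF from growing without bound.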
This discussion has been the most useful information I have found on database performance with PDM and SolidWorks -- thanks everyone!
I cannot find any data regarding the impact of the number and type of variables in PDM and their use on data cards. Basically, given that data has to be on a data card for it to be searchable and available for use on drawings, we've created a decent number of them, including some that hold workflow status etc to assist with management. All-in-all, there are about 170 edit fields on a data card. We are experiencing poor performance opening large models and some comments have been made that it's because we have too many variables. However, no-one can say how many is a limit, how to measure the impact objectively, or whether referencing the database can be controlled.
Any thoughts on this would be greatly appreciated.
If you suspect SQL is the source of performance issues then you should run a SQL Profiler trace on the issue and look for long duration queries. If you don't see any (think anything over 1 second) then odds are something else is causing you issues. Process Monitor is also a great tool for tracking down performance issues, but it requires looking for large gaps between events to find problems. Both of those tools are a little tricky to learn but they'll make you a far more effective troubleshooter for Windows/EPDM issues.
Also, if your problem is with file open times, make sure there aren't issues with old-version files by running Performance Evaluation after the model is open. If there are any files it has to update to the latest version of SolidWorks, that can easily 10x open times. Normally this would fix itself, but in EPDM you have to be conscious of it because you need to check the files out to correct it, which can be tricky (already checked out on another computer, released, etc.)
If you are mapping that many variables to your SolidWorks model's custom properties, then I could see that being a performance issue. If it's just data card variables with little mapping, then it should not affect your SolidWorks performance. We have an ECO data card with nearly 100 variables and it displays the card in about 2 seconds.
The PDM add-in in SolidWorks will affect performance some, as it must query the database periodically. The easy way to test this is to simply disable it, then open models and time different operations. Turn it back on and time those same operations. Latency to the DB and Archive servers can be an issue for this tool. It's better than it was in older versions, but unstable latency will have your users knocking on your door, sometimes with their monitor.
What kind of ping times do you get to your PDM DB server and Archive server?
Thanks for your response. I'll get ping times.
I do have a number of version free variables and do remove the custom properties where they are not used. However, I'm not clear how the difference would impact performance on loading a model into SolidWorks.
Is there a way we can set SW to open the files and return control to the UI while continuing to execute data queries in the background. Perhaps the querying activity can even be made to wait until an object obtains focus.
You could ask for that as an enhancement, but at the moment it freezes SolidWorks while it updates. I would first check with the add-in turned off and see how it compares. You can also go to the PDM options in SolidWorks and adjust when the updates to the PDM tree occur.
Yeah....we uncheck the first two "Auto tree" updates. Helped some in the past, especially before they made some other changes in older versions of PDM add-in where every right click would query the DB. You do have to refresh the PDM tree manually if you want it to show correctly after adding/removing components.
I have been meaning to jump on here to let you guys know that our performance issues have been resolved! It was like peeling an onion, as we made changes more and more issues were exposed. This took almost a year to resolve. Everyone's configurations are different so it's hard to pinpoint a performance issue. Here is a list of what I believe helped in some way, maybe a little at a time, but all together it worked for us. Hope this helps you guys too!
- Upgraded to 2017 SP5 (SP2 had issues)
- Cleansed deleted email addresses/users from the notifications in the workflows
- Disabled large send offload on the server NIC
- Cleansed folders in ePDM with more than 1000 files in them
- Cleared old email address with deleted accounts in SQL
- Removed user created notifications and disabled the feature. (we found users were creating notifications on the root level)
- Disabled show inherent values in ePDM (user group explorer settings)
- SW identified an issue with checking for duplicate files in the vault during the check-in process. They released a script with new indexing
- SW released an additional index on the folder view
On the client machine
- Upgrade to 2017 SP5
- Set virtual memory (page file) size manually (if applicable & space available)
- Upgrade video drivers to latest release – use SolidWorks RX tool, in start menu or systray
- Rename old/previous SolidWorks folder references in the registry
- Set Windows power management scheme to high performance (eliminate CPU downgrade)
- Disable automatic population of the View Palette Tools > Options > System Options > Drawings > clear the “Automatically populate View Palette with views” checkbox.
- System Options under General, make sure Show thumbnail graphics in Windows Explorer is unchecked
- Verify reference folders are cleared System Options>File Locations>Referenced Documents
- System Options – External References. System Options>External References>Uncheck "Include sub-folders" under "Search external references in
- System Options > Performance> Slide the level of detail to the right. Uncheck high quality for normal and dynamic
- Clear temp files - SolidWorks RX tool maintenance feature, in start menu or systray
After a year of having PDM, it started running slow for us; it would take forever to access a data card or go into certain folders. But then someone recommended rebuilding the indexes on the SQL server, and it fixed our issue. It's now part of my weekly SQL maintenance, and I haven't had a problem since.
I also keep our database and files on solid-state drives and access them over gigabit Ethernet.
Have you tried defragmenting the indexes of the SQL database? I don't use PDM, but I use another program with similar functionality and more, and it also works with a SQL database. After 4 years of working with SolidWorks, the database was getting pretty heavy and some assemblies even timed out because SQL had too much to search through. We weren't able to mass-print a job or do pack-and-gos.
After defragmenting, everything fixed itself. The response rate is much faster too. These are some of the problems my IT guy highlighted for me.
- Some tables had well over 1 million rows, which makes tasks take much longer to execute. SQL also reindexes itself automatically, but a table needs to be modified by at least 20% before it reindexes, which makes for quite a hefty task when it finally does so.
- Some queries were inconsistent; here are the ones that were, in case anyone is wondering. Maybe it can help SW troubleshoot their problem..?
- (@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000))SELECT
- 'Server[@Name=' + quotename(CAST(
- sys.tables AS tbl
- INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tbl.object_id)
- (tbl.name=@_msparam_2 and SCHEMA_NAME(tbl.schema_id)=@_msparam_3)
- Some queries could be optimized; here they are:
- (@session_id int)EXEC ct.session_update_last_activity @session_id
- SELECT ct.get_session_user(@session_id)
We also have big performance problems with our database (~5 GB, 13 users).
Does anyone of you use virtual components in SW? I noticed that even those virtual components have an entry, e.g. in the table dbo.Documents of our vault database, and maybe somewhere else.
We use virtual components when we import assemblies from suppliers, so as not to have a PDM entry for every part of those assemblies. We don't save the assemblies as parts, because we want to preserve the possibility to extract the parts later, e.g. for spare-part issues.
We regularly use Virtual parts and subassemblies without any adverse issues. During Check In/Out dialogs the virtual files are shown in the File Lists, but are grayed out lightly and are associated to an upper assembly. Since the "location" of the virtual file is within the assembly file, there is not any impact in terms of PDM performance. I'm a big fan of using virtual parts, because as you said, it is not a separate file in PDM to manage, assign a part number, etc.
Scott Wheeler schrieb:
[..] Since the "location" of the virtual file is within the assembly file, there is not any impact in terms of PDM performance.[..]
That's what I thought, too, until I found the virtual files in the database table "dbo.Documents". So I think it HAS an impact.
Hmmm... Sounds like it then. Now that I think about it, I have actually seen Check In's held up where a virtual part's reference file was not in the Vault. It was a regular part made virtual and the part previously had a Simulation result file that was removed prior to the Check In of the assembly file. So, PDM is actively managing relations of virtual components, more than just recognizing their parent assembly.
Jim, yes we have a maintenance plan. It covers 'Reorganize Index', 'Rebuild Index', 'Backup Database (Full)' and 'Maintenance Cleanup Task'. Shall we do any more?
The plan I use, which comes from the book mentioned in my previous post, consists of three parts:
Daily Full Backup:
Hourly Log Backup:
If there is a particular action that is slow on a consistent basis, you should run a SQL Server trace. It will tell you which stored procedure is taking the most time. Once you have the procedure name, you can enter it in the Knowledge Base to see if there are any recommended actions to improve performance.
Lots of great info on SQL DB maintenance, etc...
SOLIDWORKS resellers are great, but also consider bringing in an actual SQL-certified DBA. That's what they do.
17 GB is pretty big for a PDM SQL DB, but in the grand scheme of things there are tons of applications where SQL DBs get into the hundreds of GBs.
When addressing 'performance' issues, conduct ping tests between clients and the archive/database servers themselves. Latency is a HUGE deal. I've seen 140 ms of latency (equivalent to dial-up speeds) result in an 8-second delay between when a user right-clicks a file and when the menu is displayed. That gets even further exacerbated when the file is a drawing/assembly with tons of references.