Discussion:
Managing Large Data Models
Sean S
2013-01-29 15:04:16 UTC
Our enterprise OLTP database has grown to more than 1,000 tables. We have a relatively complex development environment, with simultaneous coding on a variety of scheduled releases across at least six distinct database environments.

We're using the PowerDesigner repository to manage all of these different versions of our data model, and as a result we spend an inordinate amount of time checking models in and out of the repository and integrating from one environment to the next.

A simple check-in of the logical and physical models can take up to half an hour. Doing this several times a day eats up a ton of time. I'm hoping there is a more efficient way to manage this process. Would anyone care to share their experiences or suggestions?

Sean
Sean S
2013-02-15 15:37:18 UTC
For the record, we've been doing some research and work on this problem, and we think we've come up with a solution. By breaking the model up into packages, we have sharply reduced the check-in and check-out times. The process has been painful, but hopefully it will all be worth it in the end.
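For anyone weighing the same split, here's a rough Python sketch of the sort of script you could use to sanity-check how evenly the tables land in each package. It assumes the model is saved in PowerDesigner's XML file format; the namespace URIs (attribute/collection/object) and element names (o:Package, c:Tables, a:Name) are taken from the model files we've seen, so treat them as assumptions rather than gospel:

    import sys
    import xml.etree.ElementTree as ET
    from collections import Counter

    # PowerDesigner's XML models declare these namespace URIs
    # (xmlns:a="attribute" xmlns:c="collection" xmlns:o="object").
    NS = {"a": "attribute", "c": "collection", "o": "object"}

    def tables_per_package(path):
        """Count the tables sitting directly under each package."""
        root = ET.parse(path).getroot()
        counts = Counter()
        for pkg in root.iter("{object}Package"):
            name = pkg.findtext("a:Name", default="?", namespaces=NS)
            tables = pkg.find("c:Tables", NS)
            counts[name] = 0 if tables is None else len(tables)
        return counts

    if __name__ == "__main__":
        for pkg_name, n in tables_per_package(sys.argv[1]).most_common():
            print(pkg_name + ":", n, "tables")

The point is just to spot whether one package is hogging most of the tables; the check-in win only materializes if people mostly touch one package at a time.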

Does anyone else out there use packages for their data models?

More later after we're finished...

Sean
Post by Sean S
Our enterprise OLTP database has grown to more than 1,000 tables. We have a relatively complex development environment, with simultaneous coding on a variety of scheduled releases across at least six distinct database environments.
We're using the PowerDesigner repository to manage all of these different versions of our data model, and as a result we spend an inordinate amount of time checking models in and out of the repository and integrating from one environment to the next.
A simple check-in of the logical and physical models can take up to half an hour. Doing this several times a day eats up a ton of time. I'm hoping there is a more efficient way to manage this process. Would anyone care to share their experiences or suggestions?
Sean
k***@gmail.com
2013-06-23 13:32:58 UTC
Post by Sean S
Our enterprise OLTP database has grown to more than 1,000 tables. We have a relatively complex development environment, with simultaneous coding on a variety of scheduled releases across at least six distinct database environments.
We're using the PowerDesigner repository to manage all of these different versions of our data model, and as a result we spend an inordinate amount of time checking models in and out of the repository and integrating from one environment to the next.
A simple check-in of the logical and physical models can take up to half an hour. Doing this several times a day eats up a ton of time. I'm hoping there is a more efficient way to manage this process. Would anyone care to share their experiences or suggestions?
Sean
I was thinking a better approach would be to split the large model into several smaller models (maybe one per combined set of packages/subject areas) and use shortcuts/symbols to reference entities from other models. That way, if you periodically load all related models into your workspace, all the symbols get refreshed to the latest definitions.
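Something like this rough Python sketch could then report how many external references each model carries. In the XML save format those references show up as o:Shortcut elements, though take the element and attribute names here as my assumption from the files I have seen:

    import sys
    import xml.etree.ElementTree as ET

    NS = {"a": "attribute"}

    # Shortcuts are how PowerDesigner represents references to objects
    # that live in another model or package.
    def list_shortcuts(path):
        root = ET.parse(path).getroot()
        for sc in root.iter("{object}Shortcut"):
            name = sc.findtext("a:Name", default="?", namespaces=NS)
            code = sc.findtext("a:Code", default="?", namespaces=NS)
            print(name, "(" + code + ")")

    if __name__ == "__main__":
        list_shortcuts(sys.argv[1])

The fewer shortcuts each model needs, the cleaner the split.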

Instead of using the repository, I'm wondering why we can't keep the models in a source control system. Any thoughts?
Sean S
2013-10-17 19:52:01 UTC
Hi, Krishna. I hadn't been out here for a while and just now saw your post. Thanks for responding.

We tried splitting the model up into packages. Many parts of the model cross over from one area to the next quite frequently, so the packages just aren't very manageable, and we generally check the entire model in and out in one go. Sybase/SAP has done some work with caching to speed up some check-ins, but it really only works well with one architect working on a model at a time. So we just got used to waiting.

Why don't we use our company's source control system?

If we use binary model files, we can only overwrite the existing copy of a model. We have more than one person working on a model at any given time, so that isn't feasible. Also, our logical and physical models are each larger than 15 MB, so storing a full copy for every revision would consume a large amount of disk space.
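To put rough numbers on it: two 15 MB models checked in, say, three times a day comes to about 90 MB of new history per day, or roughly 2 GB a month, and that's for a single development stream before counting the other environments.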

We thought about using XML model files instead, since then our source control system could store just the diffs, but the merges are too confusing and error-prone.
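For anyone tempted to try it anyway, here's a minimal Python sketch of the kind of helper that would at least make such a diff reviewable. It only summarizes the changes between two versions of a model, it doesn't merge anything, and it assumes the XML save format again (o:Table and o:Column elements with a:Name children, which are assumptions on my part):

    import sys
    import xml.etree.ElementTree as ET

    NS = {"a": "attribute"}

    def columns(path):
        """Return the set of table.column names in a model file."""
        root = ET.parse(path).getroot()
        found = set()
        for tbl in root.iter("{object}Table"):
            tname = tbl.findtext("a:Name", default="?", namespaces=NS)
            for col in tbl.iter("{object}Column"):
                cname = col.findtext("a:Name", default="?", namespaces=NS)
                found.add(tname + "." + cname)
        return found

    if __name__ == "__main__":
        old, new = columns(sys.argv[1]), columns(sys.argv[2])
        for c in sorted(new - old):
            print("added:  ", c)
        for c in sorted(old - new):
            print("dropped:", c)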

So, we're resigned to using PowerDesigner's Repository and its accompanying merge tool. It's slow, but it works.

Sean
Post by k***@gmail.com
Post by Sean S
Our enterprise OLTP database has grown to more than 1,000 tables. We have a relatively complex development environment, with simultaneous coding on a variety of scheduled releases across at least six distinct database environments.
We're using the PowerDesigner repository to manage all of these different versions of our data model, and as a result we spend an inordinate amount of time checking models in and out of the repository and integrating from one environment to the next.
A simple check-in of the logical and physical models can take up to half an hour. Doing this several times a day eats up a ton of time. I'm hoping there is a more efficient way to manage this process. Would anyone care to share their experiences or suggestions?
Sean
I was thinking a better approach would be to split the large model into several smaller models (maybe one per combined set of packages/subject areas) and use shortcuts/symbols to reference entities from other models. That way, if you periodically load all related models into your workspace, all the symbols get refreshed to the latest definitions.
Instead of using the repository, I'm wondering why we can't keep the models in a source control system. Any thoughts?