About 18 months ago I arrived at an assignment to put the foundations in place for an EA Program in a Public Sector enterprise. The program was at its very earliest stages: some isolated will to develop a more coordinated view of the enterprise, and a pre-determined decision on a tool (PowerDesigner, of course).
I arrived having done a good amount of EA-related work over the past several years, but with only passing familiarity with PD, having used it for a few conceptual data models about 5 years previously.
This post comes from a desire to:
- Take a bit of time out of a day in which I have nothing but drudge work in front of me! I somehow woke up thinking that avoidance-in-the-hope-that-it-all-goes-away had become a viable option.
- Share some of my experiences in the hope that someone starting now might benefit from my stops and starts along the way
First, some overall observations (PD-related):
- PD is a very powerful tool, and ultimately most problems can be solved with it.
- Something like 10 (100? 1000??) person-years of effort would be needed to properly document ‘how’ those problems can be solved. The documentation is an OK starting point, but there is a dearth of the deeper information and samples that someone starting out would hope for.
- Unless you are a data or information modeller, expect help and previous experience to be significantly harder to locate in the user and support communities.
- PD, while an excellent product and framework for model development, is significantly less mature in the higher-level EA space than some of the specialized products. Architecture domains like Data and Application are quite rich – the closer you get to the Business architecture domain, the less rich the out-of-box features and functionality become.
- That being said, the extension/customization framework allows you to compensate if you have the time and perseverance.
Some overall observations (task-related):
- Since I was virtually new to PD and certainly new to this enterprise, some time needed to be spent getting the lay of the land. We then launched into the project, and the following activities have proved invaluable in setting the foundation for all subsequent activities (the bolded bits are my feeble attempt at boiling this down to a couple of recommendations!)
- Things called ‘EA’ and ‘EA Programs’ vary wildly in definition from place to place. **First, figure out what EA means to the organization and formally define what problems need solving in the short to mid-term** (say a 3-year horizon as an example). It turns out that EA (in this enterprise) was less about encompassing all architecture domains and more about getting a better handle on the more technical disciplines. Even then, it was less about capturing what a developer would hope for and more about capturing what a solutions architect or technically oriented manager would hope for. That being said, retaining the flexibility to move into a more all-encompassing EA treatment past the short/mid term was also important.
- **Decide on a framework that fits the situation.** In our case, at the application level, UML-type models and diagramming were pretty much universally understood. At the higher levels of abstraction, almost nothing existed – certainly very little from an EA perspective. There was, however, a stated desire ‘to align with TOGAF’ without any real understanding of what that meant. We decided to use TOGAF to model down to the level of identifying individual applications, data stores, or infrastructure components (e.g. a network). More detailed design modelling for any of these would use established UML constructs, and the models could be linked to allow navigation from the very highest levels to the lowest (if the documentation exists, but that is another story!).
- **Take the time to capture your framework somewhat formally.** This turned out to be my first task in PD. I used out-of-box model types to capture the framework and metamodel that we would use for our EA. While decisions on the overall standard had been made, every program adapts the generic framework to meet its needs. In our case, this meant adapting the TOGAF metamodel to create an abridged version that eliminated the elements that would likely never be used in our EA. It also meant taking a first stab at defining the metadata for framework objects that would be useful in meeting the goals defined for the short/mid term. TOGAF has a starter set, but that is what it is – a starter set. This has subsequently proved to be time very well spent because it formed the basis of:
- Better definition of goals, which greatly contributed to Program planning efforts
- Formalization of the framework (and some reference models) that would form the basis of our information architecture for EA. This also proved handy later on in ensuring we were not too cavalier about changing the metamodel without thinking through the implications.
- Definition of the metadata framework associated with the different objects, which essentially ended up being the bulk of the specifications for the PD extensions that would have to be developed.
- As a by-product, I had some handy Program documentation that did not need to be created later on in the process.
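The “capture your framework somewhat formally” step above essentially amounts to writing the abridged metamodel down as data: which object types exist, what metadata each must carry, and which relationships between them you have decided to capture. Here is a minimal, language-neutral sketch of that idea in Python. Every type, attribute, and relationship name below is an illustrative stand-in, not something taken from PD or from the TOGAF content metamodel verbatim.

```python
# Hypothetical abridged EA metamodel captured as plain data: object types, the
# metadata required for each, and the permitted relationship triples. All names
# here are illustrative assumptions, not PD or TOGAF definitions.

ABRIDGED_METAMODEL = {
    "object_types": {
        "BusinessService": {"attributes": ["owner", "criticality"]},
        "ApplicationComponent": {"attributes": ["owner", "lifecycle_status"]},
        "DataEntity": {"attributes": ["steward", "classification"]},
    },
    # (source type, relationship, target type) triples we chose to capture
    "relationships": {
        ("BusinessService", "realized_by", "ApplicationComponent"),
        ("ApplicationComponent", "accesses", "DataEntity"),
    },
}


def validate_object(obj_type, metadata):
    """Check one object instance against the metamodel; return a list of problems."""
    spec = ABRIDGED_METAMODEL["object_types"].get(obj_type)
    if spec is None:
        return [f"unknown object type: {obj_type}"]
    return [
        f"{obj_type}: missing required attribute '{attr}'"
        for attr in spec["attributes"]
        if not metadata.get(attr)
    ]


def validate_relationship(src_type, rel, dst_type):
    """True if the metamodel permits this relationship triple."""
    return (src_type, rel, dst_type) in ABRIDGED_METAMODEL["relationships"]
```

A table like this earns its keep twice: it doubles as the specification for the PD extensions mentioned above, and as a sanity check you can run over repository content later on.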
- **With the prep work done, then focus on the tool(s).** With some good thought on goals and framework, and alignment of the two, we began the job of setting up our EA framework and repository. This involved:
- Working with PD to define the folder and security structures (access, roles, etc.) for our repository. This will end up getting revisited every quarter or so in the first year as specific needs become clearer – but, you gotta start somewhere!
- SAP-Sybase tells us that PD ‘can support multiple frameworks such as DODAF or TOGAF’ – it does not do so out-of-box, even in their most generic form. I assume/hope that will change down the road as the EA bits of the product mature. In any case, most organizations will still need to take the generic and make it specific to their needs. For us, this meant a significant effort to create and test the extensions that implemented our previously defined EA metamodel.
Some closing random observations and additional info:
- **Get some help initially.** I was experienced in the EA domain but not the tool, and formal training is not really all that available. My personal style leans more towards learning by doing anyway, so we ended up finding someone very experienced with the tool and knowledgeable in the EA space to mentor us through setting up our PD installation and defining/implementing our metamodel. This ended up being a part-time arrangement over a couple of months: short, sustained bursts of working together to accomplish a goal, followed by longer periods of ad hoc support as we extended our learnings ourselves. This proved to be a very efficient allocation of resources.
- **Expect to need to extend PD.** Because PD is a bit immature in this space, you can expect that extensions will have to be created, and these will definitely go beyond the basic “define some extra metadata” type of extension. After the initial activities referred to above, we set about ensuring that the 100(ish) business systems here are captured at some consistent level in our EA. Along the way we have:
- As mentioned above, implemented the metamodel – this meant defining new objects to model with (i.e. TOGAF ‘things’ not present in PD out-of-box), defining the attributes of the objects in our metamodel, and defining the object relationships that we wished to capture.
- Chosen to use replicates, not shortcuts, for common objects used all over the place. We ran into an interesting bug where extended attributes don’t replicate in an EA model, and developed a utility script to address it. We ended up developing a few such scripts to deal with things like better tracking relationships from child to parent, plus specialized analysis/reporting scripts to identify ‘orphaned’ objects.
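The ‘orphaned object’ check mentioned above is simple in principle: an object is orphaned if it participates in no relationship at all. In PD the real script would be VBScript walking the model’s object collections; purely as a language-neutral sketch of the logic, here it is in Python over a hypothetical flat export. The IDs and the export shape are assumptions for illustration, not PD’s actual API.

```python
# Hypothetical sketch of an orphan-detection pass over exported repository data.
# An 'orphan' is any object that appears in no relationship, source or target.

def find_orphans(objects, relationships):
    """Return the sorted IDs of objects that participate in no relationship.

    objects:       iterable of object IDs
    relationships: iterable of (source_id, target_id) pairs
    """
    connected = set()
    for src, dst in relationships:
        connected.add(src)
        connected.add(dst)
    return sorted(obj for obj in objects if obj not in connected)


# Example: Svc1 is linked to App1, so only App2 comes back as an orphan.
print(find_orphans(["App1", "App2", "Svc1"], [("Svc1", "App1")]))  # ['App2']
```

The same pattern (collect the IDs touched by relationships, then diff against the full object list) covers the child-to-parent tracking scripts as well, just walking the pairs in one direction.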
- For the EA Program, the portal ends up being an important component for us, from two perspectives. First, it is useful simply as the information-dissemination component for the PD repository. Second, and ultimately more important for us, is that for EA work some of the information and work will live outside PD. This is in contrast with a data/information architecture project, where both the practitioners and the product are more suited to performing the work almost entirely within the PD ecosystem. The portal is proving to be a key integration point for service delivery; we have made a number of extensions to include resources outside PD, and have extended the core to do things like downloading attached files (go to the portal and try to download an embedded Word file – doesn’t happen out-of-box).
- Lastly, the reporting engine ends up being important for creating the management-level views for business primes who are unlikely to poke around the repository until they find what they need. This was a frustrating exercise at the start, but it gets better. Understand what the reporting engine can and cannot do and shape your needs around its capabilities. Also, getting to know the PD metamodel at some surface level really helps you understand what the reporting engine is likely to do for you.
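When the reporting engine can’t quite produce the management-level view you want, a small rollup over exported repository data often can, and that is the kind of shaping-your-needs-around-capabilities compromise described above. A minimal sketch, assuming a hypothetical export where each application row carries a `lifecycle_status` field (an illustrative name, not PD’s schema):

```python
# Hypothetical rollup: count applications by lifecycle status to give business
# primes a one-glance summary instead of a repository to poke around in.
from collections import Counter


def lifecycle_summary(applications):
    """Count applications by lifecycle status.

    applications: iterable of dicts; each may carry a 'lifecycle_status' key.
    Rows missing the field are bucketed as 'unknown' rather than dropped.
    """
    return Counter(
        app.get("lifecycle_status", "unknown") for app in applications
    )
```

Feeding such a summary back through the portal keeps the management audience out of the repository entirely, which in our experience is where they prefer to stay.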
As stated at the start of this post, this was partly an exercise in procrastination, to avoid some pretty boring work that I have to get to. All this from a neophyte PD practitioner who is becoming mid-level proficient by now – hopefully some of this is useful to someone working in the EA space. If any of the things we did are of interest, we are quite open to sharing – we might even post some of the more interesting bits, like the ‘downloading files from the portal’ extension.
Last point – as I look back on this post, some comments could cause the reader to think I was negative about PD overall. That is not the case; I actually like the product, but it has its strengths and weaknesses like any other. In its overall evolution, the EA space is a relatively recent addition, so naturally there may end up being more challenges along the way.
Now, back to the day job.