Tuesday, August 18, 2009

Atomic Use Cases - Fission or Fusion

With the advent of Pega's DCO features has come the rise of the term "atomic use case.” As with so many terms that become popularized, "atomic use case" means many things to many people.

The question is: What does it mean to you?

Well, for starters, it means you've got to formulate your opinion of what the term means or you're going to be powerless to participate in any DCO-enabled projects. You CAN sit there and nod sagely without a clue. You can make up your own meaning. You can do some research on the internet on these and other UML terms. These are but some of your choices, but so far none of them puts you in the position of participating in a valuable capacity on your client project.

Let's start with "use case.”

Lots and lots of folks have talked about use cases forever. Interestingly, it turns out that this terminology is as old as Pega. It goes back to Ivar Jacobson's work in 1986, later incorporated into UML. http://en.wikipedia.org/wiki/Use_case

In 1986, Pega was architected using the bold choice of PL/1 in VMS and MVS/CICS environments. The PegaSYSTEM was hardly object-oriented at the time. In point of fact, as the Wikipedia article points out, use cases themselves aren't necessarily object-oriented, although they originated in the object-oriented community. They are, by definition, procedural: they describe a process in time order.

Use cases detail a sequence of events or actions initiated by an actor in order to achieve a goal. Yes, it can be one or more actors, so long as they are set on achieving the same goal. For example, someone in a retail store could help a customer fill out an online application for a store credit card. Via the web, that same customer could do the same thing themselves. The goal is to complete the application and send it off for processing/approval. So in this case we have two actors who will use the system to achieve the same goal – the same usage.

In our world, actors can be people or other systems. For example, in a SmartPAYMENTS setting, it could be a SWIFT, CHIPS (the kind without Ponch and Jon), or Fedwire message that kicks off the creation of a new case.

A use case, then, captures a sequence of actions or events. You can see how this would be a good way to capture requirements around various system features and functions. You learn who does what and in what order. You should uncover the decisions they make as they go. This process will start to get you into the DNA of the system and the business functions it must enable.

Done properly you will capture a lot of the requirements, at least the functional ones. Note: You will need something else to help catalogue your non-functional requirements – things such as, "The system must be available 24 hours per day, 7 days a week, 365 days a year."

So why atomic use cases?

Well, frankly, use cases come in all shapes, sizes, and lengths. It's easy to get lost in all the information you've gathered, and it's even harder to know what you don't know yet.

If a use case can have an unlimited number of steps, then it can be as hard to break down, analyze, design, and build, as any other form of requirements-based documentation.

If a use case can be at any level of detail, the overall project sizing, design, and build can be compromised. In some cases you'll have every little mouse click. In others you'll be missing entire screens and critical interfaces. How do you size and manage that reliably? How do you know you've accounted for what you don't know? How do you cope with resourcing? One developer per use case? Two?

The industry has started to popularize the term "atomic use case" of late, as people recognize that the devil is indeed in the details.

Breaking use cases down into their smallest size also enables us to break them into their most reusable, most common elements. From there, sizing, planning, designing, and building become much more predictable. With predictability comes reliability and common practices for risk mitigation, communication, etc.

So atomic refers to sizing. Is that it for Pega and DCO?

Pega doesn't use size alone to define or limit a use case. There are other criteria which drive the definition or identification of an atomic use case.

An atomic use case (an illustrative example follows this list):

  • Specifies one or more actors
  • Specifies a single event or method that triggers it
  • Does not involve a change of ownership during processing
  • Corresponds to one particular step or series of steps within a screen flow or a single flow action in the processing of a work type
  • Describes the processes to be performed including the steps involved in completing the use case, applicable edits, and the expected behaviors and outcomes
  • Provides enough business detail so that a developer can implement it
  • Should only take a few minutes to enter into the PRPC Application Profiler
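
Purely as an illustration (a hypothetical use case extending the store credit card example above), an atomic use case meeting these criteria might read:

  Name: Capture Applicant Address
  Actor(s): Store Associate; Customer (via the web)
  Trigger: Applicant submits the Personal Details screen
  Process: Display the address screen; validate ZIP code format and state/ZIP consistency; on success, advance the screen flow to the Income Details screen
  Outcome: Address recorded on the application work object

One triggering event, no change of ownership, a single step in a screen flow, and enough business detail to build and test – and small enough to enter into the Application Profiler in a few minutes.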

Think again about reusability, about tracking, reporting on your design and build progress in a project. You have people who need to track how much is built to support the stated requirements, as well as how much is tested and ready for promotion to production. And then there are milestones and invoicing as that project progresses.... You literally need to know what requirements are "built out" and "testable" and "delivered" and which are not.

Pega's DCO features enable you to create, edit, and maintain (yes even delete, but careful as always with this) atomic use cases. The intention is to connect them to the actual rules in the application that manifest and enable them.

Atomic use cases are referenced by:

  • Flow Rules
  • Activities
  • Local Actions and Flow Actions
  • "New" Harnesses

As such, it does you little good if you have one gigantic use case referenced by every single "New" Harness, Activity, and Flow Rule you've created for the 250+ features you're building. What you must do is limit the size and scope of each use case so that you truly know whether it's been built, whether it can be tested, and whether it's ready for delivery/promotion to Production.

So, it's fission? How do I know when I'm there?

You need to know the scope of your engagement. Whether you're working on an Agile or waterfall project, scope is critical. From there, you need to work the business process flows out at a high enough level that you can identify connections, commonalities, and at least some of the actors.

If you're working within a solution framework, then a lot of the groundwork is already done for you. Generate the use case catalogue for your framework. Generate the list of actors, etc. already accounted for by the framework. Work with your client to identify the differences. Identify what's not accounted for, what must be built custom to support the client. Zero in on these areas.

Your client can also help you cross use cases and actors off the list, where the framework is not applicable to the current project. Don't delete them. Just set those aside for now. Keep moving.

Whether you're working with a framework or a custom solution, start mapping out the business processes (diagrams of any kind, especially vanilla flowcharts, can be of use here). Work with your SMEs to drill into enough detail that you can tell the difference between a large, complex process and a smaller, more straightforward one.

As the detail emerges, start applying the guidelines Pega has given for atomic use cases. For example, make sure your work object doesn't transfer to a new party or system for further processing. If it does, your use case stops and a new one starts. If your process requires that you make multiple interface calls to gather data, then each one of those calls should be its own use case.

Be sure to think about your functional design tools here, including flowcharts and other pictorial representations of processes. http://www.knowledgerules.com/blogs/life/2009/08/functional-design-picture-is-worth-1k_757.html It may be easier to identify use cases by reviewing diagrams and marking boundaries between handoffs. It's also easier to keep sizing consistent this way.

Make sure to engage your LSA in this process. He or she will be able to apply not only a keen business eye but a keen technical eye to the use cases you're discovering. Your LSA may be able to find further places to break business use cases into smaller pieces.

Educate your client at all levels of the project as to why you need to get to this level of detail, and the risks you run where this level cannot be obtained. On tight deadlines, lack of detail in the requirements compounds the work for design, build, and test. Lack of detail also adds risk to each stage of the project, where risk can manifest as additional time, money, and resources.

You will experience a time period where more and more use cases are identified. That's ok. Keep listing them. Prioritize. Not all have to be fleshed out immediately and you'll even find some may be outside the scope of the current project (especially if you're working with slivers). Set those aside, keep moving on the critical path.

It's also fusion.

Be especially sensitive to things that are common. It could be that you access a customer system over and over again, picking up one data element and then another during data gathering. If the access is really the same and only the property you're picking up differs, you're really in the same use case. All that needs to happen is to parameterize the call.
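
As a plain-Java illustration (hypothetical names, nothing Pega-specific), the fusion amounts to ordinary parameterization:

  // Three "different" data-gathering calls that differ only in the
  // element retrieved from the same system.
  public class CustomerLookup {

      // Before fusion: one method (and one use case) per data element.
      String fetchName(String custId)    { return callService(custId, "name"); }
      String fetchAddress(String custId) { return callService(custId, "address"); }
      String fetchBalance(String custId) { return callService(custId, "balance"); }

      // After fusion: one parameterized call, one atomic use case.
      String fetch(String custId, String element) {
          return callService(custId, element);
      }

      // Stand-in for the real interface call.
      private String callService(String custId, String element) {
          return "<" + element + " for customer " + custId + ">";
      }

      public static void main(String[] args) {
          CustomerLookup lookup = new CustomerLookup();
          System.out.println(lookup.fetch("C-1001", "address"));
      }
  }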

Work with your LSA especially to be sure that you don't unnecessarily fragment your use cases, where you have several that are all alike except for one thing.

Also, be sure that once your atomic use cases are all identified that you can build back up from them to the business process you're enabling. If you can't, something is missing. Find it. Address it.

Again, working from process maps, not just heavy text documents, should help you perform this "reverse check.”

Is that all?

Identify your atomic use cases by name and get them as fleshed out as possible so you can move on to the next steps. With DCO and the Application Profiler, you can identify your use cases as you find them – just name them. You can then associate your steps, requirements, actors, and complexity. When the profile is generated, DCO will yield a sizing with which you can work to tune the rest of the stages of your implementation.

That's not the only time you can create or add atomic use cases, though. Once the application ruleset has been created and people are working away, you can always come back and add new use cases directly into PRPC and associate them to the rules which bring them to life.

Doing this improves traceability not only for requirements but at a project management level. You can continually generate your DCO documents to show the build against the requirements and design. This is an extremely useful tool in your arsenal when managing scope and relationship dynamics. A simple mouse click can enable you to list the use cases and the application rules necessitated by them. Whether in spreadsheet form or as a Word document, this can be very insightful – a great tool to communicate progress and risk, and to start the conversations to close any gaps.

Atomic use cases are the building blocks of this puzzle, no doubt. The consulting skills it takes to identify, analyze, and catalogue them are critical tools – and a solid foundation for your project.


Functional Design - A Picture IS Worth 1K Words

“It’s the requirements, Stupid.” Let’s face it. Projects get obsessed, enmeshed, mired, off-track, and so many other things so early on....

Using Agile or Waterfall approaches, everyone’s trying to get to the end faster, and with hopefully better results. The thing is that many folks arrive at a deadline and then discover that they had differing opinions on what was supposed to be delivered and whether something is complete, or how complete it is. The billing disputes arise even as testing kicks off, solidifying and entrenching these expectation differences, threatening the success of the phase, the project, and the relationships.

Often all sides go back to the “requirements” documents and “design” documents to compare against what’s been delivered, trying to determine:

  • What’s in scope and what’s out of scope – hey, what’s “scope” got to do with it anyway?
  • What’s a change request – do we have change requests? Where are those little buggers?
  • What’s a bug and what’s an enhancement – does this even matter?
  • What’s a missed requirement – can I say it’s your fault?
  • Whether the expectations as documented have been met, or could be construed to have been met – time to bring in your most conservative BSAs and Developers to “interpret” the constitution...I mean the requirements and design docs
  • Whether there’s time to fix any of this mess...and keep the relationships on track and stable

In many projects these documents are vague, out of date, misunderstood, and more. Most were never read, though they had many contributors and lots of meetings around them. They were created as a necessary step in the process and signed off.

They are treated as necessary evils – artifacts that are out of date as soon as they’re published, and inaccurate and incomplete from the start.

No wonder there were many and varied disconnects between expectations and the generations of Word documents produced on the way to build and thereafter. The documents stayed static. The code was dynamic. There were demos, emails, meetings, phone calls, and hallway conversations that all impacted what was built and why – all while those requirements documents collected digital rust.

Everyone’s busy inventing new formats for words as if it's the formatting that's going to make the requirements gathering effort yield better results. There are use cases, atomic use cases, RTMs, Business Requirement Documents, and a whole smorgasbord of document templates begging to be filled out.

What we’re missing is pictures, good old-fashioned diagrams. For every process, for every flow or subflow, you have a set of tasks to accomplish and an order in which they must occur. You have to identify the timing, the delays, the sequences, things that happen in parallel, the dependencies, and so on.


The more people stare at words, the more they write words, the more words there are. People fret about wording decisions and conditions in the negative or the positive, and so on.

Note also that document writing is generally “best” done as a solitary endeavor. It doesn’t lend itself easily (or efficiently) to collaboration. Sure, you can all gather around your internet meeting site and watch someone type while people shout, go for coffee, text one another, etc. Collaboration, inspiration, creativity, team-building, and more aren’t typically fostered during that fragmenting and boring process.

What can foster creativity, participation, interpretation, and a fresh look is drawing pictures – mapping the process.

It’s so much easier to share the marker on the whiteboard or the pen on the tablet PC than to turn over the “driving” (I mean typing) to someone in a Word document complete with formatting, auto-generated section numbers, and more. ;-)

What you can’t see for all the words is what’s not there, and whether things are really in the right order.

You can’t tell how one document connects to the other, if there are common processes, common attributes amongst objects, if you’ve got all the actors accounted for, and so on.


What you need are pictures, diagrams to describe your business process, the BUSINESS FUNCTION.

The diagram below is from one of our favorite customer projects. It depicts about 50% of the actions that must occur before rendering the next screen (literally decisions and data that must be gathered between 2 screens).


Translating a lengthy text description of a process into a drawing forces you into an order. One box goes first, another goes second, and so on. Inevitably you are forced to connect things (using arrows, generally) as you describe the process beginning to end. As you read a paragraph, page, or set of bullets, it finally dawns on you: you have no idea in what order these things must occur given the description. You aren’t even sure what happens AFTER these things take place, or before. In fact, you can find entire missing paths by discovering that you know what happens when a decision is true, but not what happens if it’s false. Voila! Flaws in the documentation are revealed quickly and effectively, in ways that a 28th (don’t laugh, they do happen) revision of the Word document never would.


Note also that it’s far easier to recognize patterns in images than in words. You start to recognize common processes and actions when you realize you’re drawing them again. Remember, with words, people can vary word choice and order enough to disguise repetition. Pattern recognition gives you the chance to lift up multiple common items, visit them together, and corral them for detail and design.

Pictures can reach people that words don’t.


Here’s something else to consider: not everyone’s primary or best method of communication is verbal or written. For perhaps a third of your clients and TEAM MEMBERS, a different communication method works better.

The lesson: PICTURES and WORDS will reach a far broader audience, and they will illustrate the process far more effectively and efficiently if used together. It’s really easy to communicate questions, issues, etc. when you can literally SHOW it to someone without making them READ and PARSE dense text.

With a flowchart, you can point to something and ask, “What happens here?”

Pega and DCO make a stab at this, but note that these days you have to break large processes down into “atomic use cases” before you can get them into the system. You have to get your application defined inclusive of the atomic use cases (using the Application Profiler) BEFORE you can build a single flow rule. That means you’ve already identified all your processes, all the steps, and all the common processes, etc. before you’ve generated a single application rule.

To get your use cases broken out at any level, let alone the atomic level, you’ve got to define the process. It’s right there that diagrams, flowcharts, process maps can really ease communication and cut right to the heart of the matter.

Functional design makes it possible for the business to look at a chart (or a series) and say “Yes, that’s what we want the system to DO,” and sign off. Requirements documents – textual descriptions – can capture the details, and also the “non-functional” requirements. Wireframes and other tools can describe the look and feel of the desired user experience.

Note also that it’s really easy to show scope creep in a diagram. You can literally say, “This process was 1 page when we diagrammed it and it was signed off. Now it’s 3 pages, and the steps are in a totally different order than built.” Ah. How many pages of an MS Word document, how many email threads, and how many non-existent meeting notes would a Senior Manager with 15 minutes to consider your change request have to read to identify the scope of the changes and their ramifications?

From functional design and requirements documents, the technical design can flow. The technical design will describe the code solution – how the system will function technically. It should not be confused with the functional design, which is defined in business terms and steps.

Get the right design in front of the right audience.

Just as business users should not be required to evaluate technical design – to say “Yes, this should be custom Java”, technical consultants should not have to parse a myriad of dense documents to SEE what the business processes are that must be enabled, to see what the system should “do” when it’s completed. Functional design should let your LSA and technical consultants digest quickly what a process does and literally SEE whether various components are reusable, common, already existing in the framework, and so on.


The next time you’re bogged down in a meeting about requirements, think about whether you should be working together, collaboratively, over an image of the process and then break that out for detailed description and design, instead of debating endlessly whether section 5.4.3.2.1 is correct as stated.

As they say, a picture is indeed worth a thousand words.

Coming soon will be a follow-up article discussing how to incorporate these tools into an iterative project. Stay tuned.


Wednesday, August 12, 2009

This is just a varning!!

As developers, we always strive for efficiency and the hope of doing our work the best possible way. Sometimes this mentality drives us into solid walls. But being the smart people that we are, we create our own holes in the wall, break the obstacles, and allow ourselves to see the big picture.

One such example of efficiency revolves around PRPC activities and variables. There are three common ways we can store data in memory: properties, parameters, and local variables. Properties are great in that they are static and can be referenced from basically anywhere (assuming we know the correct clipboard page). Their downfall is that, as intended, they are static: any temporary property that is created will always remain in the ruleset in which it was created.

The other two are dynamic: created as necessary, they exist only while the activity is running. There are, however, distinct differences between the two. Local variables can only be used within a single activity. Parameters are more flexible; they can be passed from one activity (or flow) to another. That added flexibility comes with higher memory overhead, which makes parameters less efficient for quick processing. So when a process does not require sharing data across activities, there is no reason to use a parameter over a local variable.
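
A rough analogy in plain Java (illustrative only, not PRPC-generated code) shows the scoping difference:

  public class ScopeAnalogy {

      // Like an activity local variable: it exists only inside this
      // method and carries no hand-off overhead, because it is never
      // handed off to anything else.
      static int sumToTen() {
          int runningTotal = 0;              // "local variable"
          for (int i = 1; i <= 10; i++) {
              runningTotal += i;
          }
          return runningTotal;
      }

      // Like a parameter: it exists precisely so a value can cross the
      // boundary from one unit of work to another.
      static int doubleIt(int carried) {     // "parameter"
          return carried * 2;
      }

      public static void main(String[] args) {
          System.out.println(doubleIt(sumToTen())); // prints 110
      }
  }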

For troubleshooting, however, there is a bigger problem: parameters are easily viewable in the Tracer, whereas local variables are not.

There is a solution for this!

As of PRPC 5.3, Pega provides the Log-Message method. This method allows an activity to log any type of message to the physical log file or, if necessary, to the Tracer. In our activities, we can use this method to log anything we need (including local variables), depending on our logging level. More importantly, for real-time tracing, we can have those same messages displayed in the Tracer for us.

Below is a screenshot of a Tracer run which shows such messages with local variable values.



For the Tracer options, the only thing we need to do is enable the "Log Messages" option in the Events to Trace section.

Now... just because we are using a more efficient variable, that does not mean we should start logging it every chance we get. It would probably make more sense to conditionalize the Log-Message step so that it runs only when needed (let's say, when we're running our application in Debug mode).
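
As a sketch only (field names recalled from the 5.x activity form – verify them on your version; the when rule and variable names are hypothetical), such a conditionalized step might look like:

  Step n:
    When:         DebugEnabled                        <- hypothetical when rule
    Method:       Log-Message
    Message:      "orderTotal = " + Local.orderTotal
    LoggingLevel: InfoForced

With the Tracer's "Log Messages" event enabled, that message – local variable and all – shows up in the trace, and it stays quiet in normal runs.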



Of course, all of this is really irrelevant information, since we all know that our code is perfect :)