Monday 23 February 2009

Week 9 of 2009

I continued working on operations and parameters this week. Creating a clean parameter-based interface for Action Language code has required a considerable amount of iteration between modelling and coding. I'm trying to find a balance that results in a clean but simple OOA, Java and XML representation for operations and parameters. If I were generating the Java and XML directly from the OOA using archetypes then this wouldn't be an issue. However, since I'm creating an egg without a chicken at present, I have to factor in all three.

I found a nasty J2SE 1.5 bug in GridBagLayout which causes OOA Tool to lock up when there are more than 512 rows (see Java Bug ID 5107980). I hit this MAXGRIDSIZE limit when I tried to open a Subsystem editor in the OOA Tool project. The Subsystem editor tries to list all objects and relationships, showing which subsystem each is assigned to. The bug is fixed in Java SE 6 (the limit is removed entirely) but there are no plans to fix it in J2SE 1.5. I added a quick workaround which traps the problem but doesn't fix it: Build 014 now shows a blank panel if the limit is exceeded rather than locking up OOA Tool. Fixing the problem entirely would mean abandoning GridBagLayout, which I don't want to invest time in at present since the problem is fixed in Java SE 6.
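The workaround can be sketched roughly as follows. This is an illustrative reconstruction, not OOA Tool's actual code: the class and method names are hypothetical, and only the guard logic matters, i.e. count the rows before handing them to GridBagLayout and bail out with an empty panel rather than letting the J2SE 1.5 layout manager hang.

```java
import java.awt.GridBagLayout;
import javax.swing.JPanel;

// Hypothetical sketch of the Build 014 workaround: J2SE 1.5's GridBagLayout
// hangs once a grid exceeds MAXGRIDSIZE (512) rows (Java Bug ID 5107980),
// so the row count is checked up front and a blank panel is shown instead.
public class SubsystemPanelBuilder {

    // J2SE 1.5 hard-codes MAXGRIDSIZE = 512; Java SE 6 removes the limit.
    static final int MAX_GRID_ROWS = 512;

    static boolean exceedsGridLimit(int rowCount) {
        return rowCount > MAX_GRID_ROWS;
    }

    static JPanel buildPanel(int rowCount) {
        if (exceedsGridLimit(rowCount)) {
            // Trap the problem rather than fix it: show a blank panel
            // instead of locking up the whole tool.
            return new JPanel();
        }
        JPanel panel = new JPanel(new GridBagLayout());
        // ... add one row of components per object/relationship here ...
        return panel;
    }
}
```

Note this only traps the symptom; the real fix is simply running on Java SE 6, where the limit no longer exists.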

My confidence in J2SE 1.5 took a knock after finding this problem. I've only stayed with this version because the Mac still doesn't fully support Java SE 6, even though J2SE 1.5 is now End of Life. I've now decided to move to Java SE 6 as my default platform while still retaining J2SE 1.5 support, for the moment at least. The latest version of Java SE 6 is Update 12, which includes the new Nimbus look and feel. It looks better than Metal so I will use it as the standard look and feel from now on (while still allowing users to select their preferred look and feel). There were a few minor display issues when I first switched to Nimbus in OOA Tool but these are fixed in Build 014.
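Selecting Nimbus while degrading gracefully on J2SE 1.5 can be done by searching the installed look and feels rather than hardcoding a class name (Nimbus's package moved between releases). The sketch below is a minimal, assumed approach, not OOA Tool's actual code; on a JVM without Nimbus it silently stays with the default look and feel.

```java
import javax.swing.UIManager;

// Sketch of making Nimbus the default look and feel while still letting
// users pick another one. Nimbus ships with Java SE 6 Update 10 and later;
// on J2SE 1.5 the lookup below finds nothing and the default (Metal) stays.
public class LookAndFeelSelector {

    // Returns the name of the look and feel actually in effect afterwards.
    static String installNimbusIfAvailable() {
        for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) {
            if ("Nimbus".equals(info.getName())) {
                try {
                    UIManager.setLookAndFeel(info.getClassName());
                    return info.getName();
                } catch (Exception e) {
                    break; // fall back to the current look and feel
                }
            }
        }
        return UIManager.getLookAndFeel().getName();
    }
}
```

Looking Nimbus up by name avoids depending on `com.sun.java.swing.plaf.nimbus.NimbusLookAndFeel` directly, which would fail to compile against J2SE 1.5.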

Monday 16 February 2009

Week 8 of 2009

I've been working on mathematically dependent attributes and the framework behind their associated value calculations. This involved revisiting the Process Model subsystem to sort out the layer between models and Action Language code. The top-level concept in the Action Language subsystem is the Statement Block which defines a sequential set of executable statements along with a scope for defining local variables. My initial thought was to define different types of top-level block for value calculation, relationship navigation, actions, synchronous services, etc. However, the distinction between event data items and parameters led to some hidden dependencies in Action Language expressions. Thus, I expanded my initial generalization of operation (which previously included synchronous service and process) to include all processing activities. This means that all statement blocks are now encapsulated in an operation with defined parameters including:

  • self parameter (which normally contains an object reference),
  • received event parameters (which reference event data items),
  • simple input parameters,
  • return parameter,
  • and simple output parameters (allowing multiple return values).
Some operations have fixed input and output parameters while others have user defined input and output parameters.
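The generalization above can be sketched as a tiny data model. This is a hypothetical illustration (the class and enum names are mine, not OOA Tool's): the point is that self, received event data items, inputs, the return value and extra outputs are all modelled uniformly as parameters on an operation that owns the statement block.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the operation/parameter generalization: every
// statement block is encapsulated in an operation with an explicit,
// uniform parameter list.
public class Operation {

    enum ParameterKind { SELF, RECEIVED_EVENT, INPUT, RETURN, OUTPUT }

    static class Parameter {
        final ParameterKind kind;
        final String name;
        Parameter(ParameterKind kind, String name) {
            this.kind = kind;
            this.name = name;
        }
    }

    final String name;
    final List<Parameter> parameters = new ArrayList<Parameter>();

    Operation(String name) { this.name = name; }

    Operation add(ParameterKind kind, String paramName) {
        parameters.add(new Parameter(kind, paramName));
        return this;
    }

    // Some operation kinds have fixed parameters (e.g. event generators),
    // others allow user defined inputs and outputs (e.g. synchronous services).
    int countOf(ParameterKind kind) {
        int n = 0;
        for (Parameter p : parameters) {
            if (p.kind == kind) n++;
        }
        return n;
    }
}
```

Treating event data items as just another parameter kind is what removes the hidden dependencies in Action Language expressions mentioned above.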

Operations themselves include:

  • attribute calculations (for calculating one or more mathematically dependent attributes),
  • relationship navigations (for navigating across a mathematically dependent relationship),
  • actions (performed on entry to a state),
  • synchronous services (for defining the public interface to a domain),
  • processes,
  • functions (for defining reusable side-effect free calculations),
  • and bridge mappings (for mapping wormholes to control reception points).
Functions are like transformations within process models except that they can be defined on any operation owner and used within any operation.

Processes include:

  • simple processes (including accessors, transformations and tests),
  • state model event generators,
  • polymorphic event generators,
  • request wormholes including:
    • domain-crossing event generators,
    • and bridging processes,
  • synchronous return wormholes,
  • and asynchronous return wormholes.
In OOA91, simple processes are accessors, transformations or tests. Other processes include event generators and wormholes (added in OOA96). The objective was to partition processing into fundamental processes. However, this categorization is not really exclusive: tests are processes returning values (normally boolean) used in conditional control flows, yet many transformations also return test values as well as their transformation outputs. Accessors are further categorized as create, read, write or delete accessors. OOA09 takes the view that simple processes are atomic access units that may include multiple accessors, transformations and tests. The important thing here is to ensure that hard constraints are not broken after simple processes are executed, e.g. don't create a supertype object in one process and the associated subtype object in another process. Simple processes still can't generate events, and event generators still can't perform accesses or return output values in OOA09. This change in focus with regards to process partitioning is a crucial difference between OOA91 and OOA09.

While bridge mappings include:

  • request mappings,
  • synchronous return mappings,
  • asynchronous return mappings,
  • create counterpart mappings,
  • semantic shift mappings,
  • and watchpoint mappings including:
    • object created mappings,
    • object deleted mappings,
    • object updated mappings,
    • event generated mappings,
    • and operation invoked mappings.
I won't get into bridge mappings this week since they are a big topic.

I should also mention operation owners which include:

  • objects,
  • mathematically dependent relationships,
  • event destinations,
  • and bridges.
Most operations will be owned by event destinations (state models, terminators and polymorphic destinations) in OOA09 rather than objects. This is because Shlaer-Mellor is lifecycle centric rather than object centric. Objects own value calculations (for mathematically dependent attributes). Mathematically dependent relationships own relationship navigations. State models own actions, simple processes and state model event generators. Terminators own synchronous services, request wormholes (domain-crossing event generators and bridging processes) and return wormholes. Polymorphic event destinations own polymorphic event generators. Bridges own bridge mappings (all varieties). In addition, all operation owners may own one or more functions.
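The owner-to-operation mapping above can be summarized compactly in code. This is purely an illustrative lookup table (the names are taken from the prose, the API is hypothetical), but it makes the lifecycle-centric bias easy to see: state models and terminators own most operation kinds, while objects own relatively little.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical summary of which operation kinds each owner kind may own
// in OOA09, as described in the text. Every owner may also own functions.
public class OperationOwners {

    static List<String> operationsOwnedBy(String owner) {
        if ("object".equals(owner)) {
            return Arrays.asList("attribute calculation", "function");
        } else if ("mathematically dependent relationship".equals(owner)) {
            return Arrays.asList("relationship navigation", "function");
        } else if ("state model".equals(owner)) {
            return Arrays.asList("action", "simple process",
                    "state model event generator", "function");
        } else if ("terminator".equals(owner)) {
            return Arrays.asList("synchronous service", "request wormhole",
                    "return wormhole", "function");
        } else if ("polymorphic event destination".equals(owner)) {
            return Arrays.asList("polymorphic event generator", "function");
        } else if ("bridge".equals(owner)) {
            return Arrays.asList("bridge mapping", "function");
        }
        return Collections.emptyList();
    }
}
```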

The next build of OOA Tool should allow all of the operations defined above to be created. However, I'm not sure yet whether I will have graphical process models in the next build. There are other issues to be resolved involving data and control flows before graphical process modelling can be delivered.

Monday 9 February 2009

Week 7 of 2009

Since nobody made any suggestions for the next build, I've gone with the first choice I gave last week, i.e. finish implementing model population as a precursor to full simulation support.

I started with referential and polymorphic attribute instances. In a deployed system, referential values (and polymorphic values mapped to referential values) would be accessed rarely if at all. Furthermore, all referential and polymorphic values must ultimately map to existing base values or be undefined. Thus, both can be calculated on demand (unless watchpoint mappings involving those attributes exist) apart from the fact that their values are accessed repeatedly when object instance tables are viewed in OOA Tool. As a consequence, referential and polymorphic values are cached in OOA Tool but this caching would not normally be required in a deployed system. These cached values need to be cleared whenever dependent base values or navigated relationship instances are changed. There is a balance to be made here between the amount of effort required to determine if a cached value should be cleared and the amount of effort required to calculate the cached value. If watchpoint mappings are listening to referential or polymorphic attribute changes (not recommended) then we will need to spend a considerable amount of time deciding whether to clear and recalculate those values. Watchpoint mappings (see Recursive Design subsystem) are not implemented in OOA Tool yet.
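The calculate-on-demand-with-caching pattern described above can be sketched as follows. This is a minimal illustration with invented names, not OOA Tool's implementation: the referential value is computed lazily, reused across repeated table views, and explicitly invalidated when a dependent base value or navigated relationship instance changes.

```java
// Minimal sketch of on-demand calculation with caching for referential
// (and polymorphic) values. In a deployed system the cache would normally
// be unnecessary; in OOA Tool it avoids recalculating on every table view.
public class ReferentialValueCache {

    interface Calculator {
        String calculate(); // resolve to the base value (or undefined)
    }

    private final Calculator calculator;
    private String cached;    // null means "not calculated yet"
    private int calculations; // how many real calculations happened

    ReferentialValueCache(Calculator calculator) {
        this.calculator = calculator;
    }

    String get() {
        if (cached == null) {
            cached = calculator.calculate();
            calculations++;
        }
        return cached;
    }

    // Called whenever a dependent base value or navigated relationship
    // instance changes.
    void invalidate() {
        cached = null;
    }

    int calculationCount() {
        return calculations;
    }
}
```

The trade-off in the text lives in `invalidate()`: the cheaper it is to decide a cached value is stale, the more attractive caching becomes relative to recalculating on every access.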

I then started looking at mathematically dependent attributes (called derived attributes in Executable UML). This is going to be the main feature in Build 014 since it will allow Action Language code to be executed for the first time in OOA Tool. After looking over all the mathematically dependent attributes defined in the OOA of OOA, I quickly realized that a simple one-in, one-out function was not sufficient to calculate all of those attributes. Some attributes could be calculated on demand since the calculation would always be quick. However, other calculations may be very expensive, so expensive that ways need to be found to limit the number of times those values are recalculated. A Maximum Recalculation Interval is useful here for human-only information such as statistics and pretty labelling. There is no point recalculating such values more often than a human can process them. Delayed recalculation is an interesting topic which warrants further discussion. There are also situations where all object instances should have one or more mathematically dependent attribute values calculated at once, e.g. subsystem number ranges. Thus, a value calculation may map to one or more mathematically dependent attributes and may apply to one or all object instances. Next week I will add support for value calculations and implement mathematically dependent attribute instances (before implementing statement block execution logic).
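The Maximum Recalculation Interval idea can be sketched as a simple throttle. The names and API here are assumptions for illustration only; the clock is injected so the behaviour is testable, and a recalculation is skipped whenever the previous one happened within the configured interval.

```java
// Hypothetical sketch of a Maximum Recalculation Interval throttle for
// human-only values such as statistics and pretty labels: expensive
// recalculations are skipped if one happened recently enough.
public class ThrottledValue {

    interface Clock {
        long nowMillis();
    }

    private final long maxRecalcIntervalMillis;
    private final Clock clock;
    private long lastRecalcMillis;
    private boolean everRecalculated;
    private int recalculations;

    ThrottledValue(long maxRecalcIntervalMillis, Clock clock) {
        this.maxRecalcIntervalMillis = maxRecalcIntervalMillis;
        this.clock = clock;
    }

    // Returns true if a recalculation was actually performed.
    boolean recalculateIfDue() {
        long now = clock.nowMillis();
        if (everRecalculated && now - lastRecalcMillis < maxRecalcIntervalMillis) {
            return false; // too soon: a human wouldn't notice the difference
        }
        everRecalculated = true;
        lastRecalcMillis = now;
        recalculations++;
        // ... perform the expensive value calculation here ...
        return true;
    }

    int recalculationCount() {
        return recalculations;
    }
}
```

A real implementation would also need the batch case mentioned above, where one value calculation updates several attributes across all object instances at once.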

Since I spent the week thinking about attributes and attribute instance implementation, I also decided to have a think about attribute classification. I wanted to be sure I hadn't made any mistakes here. I did in fact decide to move the Naming attribute, which was defined on Simple Attribute, to Base Attribute after doing this analysis. I captured some of this analysis in the previous blog. The IE numbered bullet bug caused me some grief here and I ended up bypassing numbered bullets in that blog, yuk!

Thursday 5 February 2009

Attribute Classification

The attribute classification hierarchy in OOA88 and OOA91 is shown below:

Attribute
  1.
    • Descriptive Attribute
    • Naming Attribute
    • Referential Attribute

  2.
    • Identifying Attribute
    • Non-Identifying Attribute

The attribute classification hierarchy in OOA09 is shown below:

Attribute
  1.
    • True Attribute

      • Base Attribute

        1.
          • Simple Attribute
          • Arbitrary ID Attribute
          • Mathematically Dependent Attribute (Derived Attribute in Executable UML)

        2.
          • Descriptive Attribute
          • Naming Attribute

      • Referential Attribute

    • Polymorphic Attribute

  2.
    • Identifying Attribute
    • Non-Identifying Attribute

In the above classifications, a numbered list represents an "and" hierarchy while a plain list represents an "or" hierarchy, e.g. an attribute is true or polymorphic, and identifying or non-identifying. Attributes can also be classified according to their associated data type. However, data type hierarchies will not be discussed here.

OOA96 introduced mathematically dependent attributes which are discussed in [31May08].

OOA09 introduces polymorphic and arbitrary ID attributes. Polymorphic attributes (and true attributes) are discussed in [26May08]. A detailed discussion on arbitrary ID attributes will be published soon. OOA09 also introduces base attributes (and simple attributes) because all referential attributes must resolve to a single base attribute (see [29May08]) and all base attributes are descriptive or naming attributes.

Arbitrary ID attribute could be merged with simple attribute except that it has a very different lifecycle. Although arbitrary ID attributes can be assigned values like simple attributes, any assigned values are temporary in nature since the allocation of arbitrary ID values is platform controlled, not user controlled. Anyone who assigns a value to an arbitrary ID attribute needs to understand how long that value will persist (arbitrary ID types provide some control here but not complete control). For ordinal arbitrary ID values, the user is rarely interested in the exact value, only its ordering relative to other object instances. For non-ordinal arbitrary ID values, the user is almost never interested in the exact value. Furthermore, the user may never actually need to access such values at all, i.e. the arbitrary ID attribute may only exist in the information model to formalize one or more relationships.
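The platform-controlled nature of ordinal arbitrary IDs can be illustrated with a trivial allocator. This is a hypothetical sketch, not part of any metamodel: the essential point is that callers may rely on the ordering of allocated values but never on the values themselves, which the platform is free to reallocate.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration of ordinal arbitrary ID allocation: values are
// platform controlled, users may depend only on their relative ordering,
// and any specific value is temporary in nature.
public class OrdinalArbitraryIdAllocator {

    private final AtomicLong next = new AtomicLong(1);

    long allocate() {
        // Platform controlled, not user controlled: a later allocation
        // always orders after an earlier one, but the exact numbers carry
        // no meaning and may change on reallocation.
        return next.getAndIncrement();
    }
}
```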

Naming attributes are those representing arbitrary names and labels. Most but not all arbitrary ID attributes are naming attributes. Mathematically dependent attributes are often used to create formatted names and labels from more primitive naming attributes. The distinction between descriptive and naming attributes is not essential for code generation purposes and many information models will ignore this classification, defining all base attributes as descriptive attributes.

Identifying attributes are central to the idea that relationships should be formalized in information model specific terms allowing constraints between relationships to be clearly specified. However, the object identities defined by identifying attributes are not fixed and may change frequently depending on the number and nature of the identifying attributes that compose each identifier. The reason that mathematically dependent, referential and polymorphic attributes may be identifying attributes is that the implementation of object identity within Action Language code is normally based on object instance references (or handles). It only appears within an information model that object identity is implemented using identifying attributes. Of course, this appearance was reinforced in OOA91 because Lifecycle Model directed events carry identifying attributes to reference destination object instances. This is no longer the case in OOA09.

Some would argue that a software architecture should be free to choose how it implements object identity. However, the reality is that you can only use mathematically dependent, referential and polymorphic attributes as identifying attributes on secondary identifiers unless you use some concept of object instance reference. And even if you do stick to simple attributes for identifying attributes, you still need to be able to handle changes to those attributes. You can, of course, use identifying attributes to find object instances (using a select statement) when those object instances are known to be consistent. However, in most cases, object instances are found by navigating relationships, and relationship instances are best realized using object instance references (not identifying attributes).

Monday 2 February 2009

Week 6 of 2009

I finally released OOA Tool 1.0 BETA Build 013 last Friday. It shouldn't have taken three months to roll out all of the features in that release. Unfortunately, I started making changes on a number of parallel tasks and didn't want to release a broken build. I will try to avoid tackling too many parallel tasks in the next build. The main features added in Build 013 were:

  • expanded and fully implemented data types,
  • metamodel population generation for version 0.01 of the official OOA of OOA,
  • model population editing,
  • project matrix support,
  • and a new Executable UML2 model for the Executable UML Foundation (fUML).

All of the data types defined in the official metamodel Data Dictionary subsystem are now supported in OOA Tool. However, there are a few data types not in the official metamodel yet, e.g. reference types, return coordinate types, transfer vector types and abstract types. The additional data types will be added as they become needed.

The metamodel population support is sufficient for generation of Information Model Reports now. I just need to reintegrate the translator I wrote last year into the project population framework that underpins the model and metamodel populations. One feature I had thought about but haven't implemented yet is the ability to link a metamodel population to an external metamodel project. As discussed in last week's weekly report, an internal metamodel is used to ensure the Java population coding doesn't get out of sync with the metamodel. However, this means all attributes are defined as simple attributes, causing some (e.g. arbitrary ID attributes) to be flagged in red when you browse metamodel population data. It would be nice if OOA Tool allowed you to reference an external metamodel which would include correctly typed and fully resolved attributes. However, the external metamodel would still need to be validated against the internal metamodel when it is loaded.

The model population support allows object instances and relationship instances to be created and edited. However, it doesn't support mathematically dependent attributes, referential attributes or polymorphic attributes yet. These attributes remain undefined when object instances are created. It also doesn't support all forms of arbitrary ID attribute yet. Population constraint validation is also missing at the moment. However, this requires some notion of transaction to be effective. More on this in the future.

The project matrix support was a secondary feature. I'm not planning on doing any more work on project matrix tasks or activities prior to getting OOA Tool out of beta.

The fUML modelling I did was part of my effort to keep abreast of what is happening in the OMG with regards to making UML executable. It should be possible to create an executable fUML library from an OOA09 project that could be executed in a third party UML tool, or to load an fUML library into OOA09 as an implementation domain. The limitation here would be what event dispatch scheduler and polymorphic operation dispatcher the external fUML library would require. fUML defines variation points so that the current Executable UML tool vendors don't have to agree a standard policy on event scheduling and polymorphic dispatching. The other sticking point here is the XMI standard which is basically a waste of space! The only way this will ever work is if everyone adopts a single interpretation, e.g. the Eclipse implementation. However, you still have the basic problem that XMI is independent of the UML metamodel and both are regularly being changed. The OOA Interchange Format on the other hand is independent of the OOA of OOA since it defines an implicit OOA of OOA within its DTD. The purpose of the OOA Interchange Format is to allow the exchange of projects between tools. The purpose of the OOA of OOA is to define a data model for translation purposes. Obviously, changes in one will affect the other but these changes are controlled separately. In my opinion this is a major scalability flaw in the XMI/UML design.

Now to discuss the next build, i.e. what should be in it? There are a number of areas outstanding that can be tackled now:

  • finish model population support by implementing non-simple attributes including mathematically dependent attributes (this will require the Action Language to be documented and put under change control),
  • integrate and update the previously implemented translator based on BridgePoint's old Archetype Language (this will require the Archetype Language to be documented and put under change control),
  • clean up the integration of patterns with symbolic types (allowing plugin alternatives), defining the default Pattern Language in a separate domain and ensuring it is documented and put under change control,
  • fix some outstanding issues with the state modelling design and merge the State Model subsystem into the official metamodel,
  • finish process modelling design and implementation,
  • and finish off the remaining technical notes on Shlaer-Mellor and Executable UML notation (and I'm sure I will find a few presentation issues I need to fix in OOA Tool when I do so).
My current plan is to start with the first item. However, I'm willing to listen to suggestions. Does anyone have any strong feelings about what should be implemented and released next?

One final thing. I would like to thank Kennedy Carter and Ian Wilkie in particular for making the following white papers publicly available: