He pointed out that it seems problematic that Eclipse projects are not considered reference implementations of OMG specifications; clearly a specification cannot be proven sound without a reference implementation. Past experience implementing UML2 has demonstrated the value of feeding problems discovered in the reference implementation back into the specification for resolution. Ecore's influence on MOF, leading to EMOF, is another example.
He talked about the various processes driving specifications at the OMG and those driving project development at Eclipse. They're quite similar. Eclipse has very regular release cycles (e.g., today, woo hoo!), so synchronizing the OMG's specifications with a similarly regular delivery pattern would have value. Tying specifications to reference implementations would improve the likelihood of success for both the project and the specification. There is often a gap between the high-level intent and its actual realization in code. The people and organizations working on the specifications are often an entirely different group from those working on the project. Surely that can't be ideal. A certified reference implementation would eliminate issues such as ambiguities in how a specification is interpreted. The audience asked how problem reporting at the OMG could be tracked more easily, in a way similar to how Bugzilla is used at Eclipse.
After an overview of the rest of the day's agenda, Pete Rivette, CTO of Adaptive, presented some background on the challenge of CMOF (Complete Meta Object Facility). He talked about the history of MOF, how it gradually evolved away from its CORBA roots, and about the need for IDL interfaces for everything. At the MOF 1.4 level, its mapping to Java was codified in the JCP as JMI. MOF 2.0 was developed in tandem with UML 2.0 and included a better separation of concerns. The separation into EMOF and CMOF was driven to a large extent by the influence of EMF's Ecore, and hence by model-driven Java development; CMOF was driven more by the needs of metamodel developers. He described the overall MOF 2 structure and its dependencies, and discussed the Java Interface for MOF (JIM) effort, which has essentially stalled.
CMOF includes things like full-fledged associations, association generalization, property subsetting and redefinition, derived unions, and package merge. He motivated the use cases for these capabilities. Then he talked about what Kenn has done to support CMOF on top of EMF's more basic EMOF support. He'd like to see more complete support for CMOF, e.g., a UML profile and a MOF DSL, i.e., a GMF tool, as well as better package merge tools for metamodel selection and static flattening. Conversion between UML2 and CMOF XMI is mostly done. A standard conversion or mapping between EMOF and CMOF would be very useful. He'd also like to see a CMOF Java API that's compatible with EMOF/EMF and, of course, a specified mapping of EMOF onto Java.
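To give a flavor of what these CMOF features mean, here's a tiny Python sketch (purely illustrative, not EMF's or any CMOF API) of property subsetting and a derived union: the union property is read-only and computed from the properties that subset it, rather than stored.

```python
# Illustrative only: a "derived union" in UML/CMOF terms is a read-only,
# computed property whose value is the union of all properties that
# declare themselves as subsets of it.

class Classifier:
    def __init__(self, name):
        self.name = name
        self.owned_attributes = []   # subsets "members"
        self.owned_operations = []   # subsets "members"

    @property
    def members(self):
        # Derived union: never assigned directly, always computed.
        return self.owned_attributes + self.owned_operations

c = Classifier("Book")
c.owned_attributes.append("title")
c.owned_operations.append("checkOut")
print(c.members)  # ['title', 'checkOut']
```

EMOF deliberately omits this machinery, which is part of why a standard mapping between the two levels would be so useful.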
He too would like to see better coordination between the OMG and Eclipse. We need to think about what to do with MOF 2.1 and what the best direction for it is. Constraints are important when defining models, so improved support for expressing and validating them is key. Diagram definition for visual rendering of models would also be useful. He mentioned some interesting future work, such as Semantic MOF, which supports things like an object changing its class, or having multiple classes at once. It sounds cool, but makes me cringe. A few questions came up related to Semantic MOF, and that started to hurt my brain.
After the break, James Bruck gave a presentation sharing his experiences implementing an OMG specification, namely UML2. He described the split between "those who make things happen," i.e., the developers, and the folks working on the standards. Issue resolution in the specification is very important when driving an implementation. A specific recent issue is how to represent Java-style generics in UML; certainly profiles could be used, but it would be better expressed directly in the metamodel. He gave many examples where the language of the specification is inconsistent with the normative model underlying it. These things come to light when folks implement the specification and compare the implemented behavior with the descriptive text guiding their implementation decisions. It's also very hard to track changes to the specification when updating the implementation to conform to the latest version.
He suggested the specification should include architectural intent to motivate the reasons for the design. A proper summary of changes as the specification evolves is very important for understanding the impact on the implementation. Harvesting information surfaced by reference implementations is key. A proper tracking mechanism for issues, like Bugzilla, would be very useful; in fact, why not just use Bugzilla? There's certainly some room for improvement. Communicating the developers' ideas back to the OMG is sometimes like a game of telephone, involving both information loss and information injection. There was some discussion about the organizational structure at the OMG that makes individual involvement difficult.
Victor Roldan of Open Canarias talked about their approach to implementing QVT. They had problems with the specification, including inconsistencies, ambiguities, insufficiencies, and the lack of a reference implementation. Why is there a V in Query/View/Transformation when views don't really factor into the actual specification, he asks. The split between Core and Relations is imperfect. There is just a long list of things that obviously didn't come to light until someone tried to implement what was described in English. No one seems interested in Core; instead, Relations is tackled directly. And it's just not expressive enough beyond toy examples.
Their solution is analogous to Java's translation to byte code, i.e., they have Atomic Transformation Code (ATC) to represent low-level primitives and a Virtual Transformation Engine to execute those primitives. He gave a quick demo of the tool in action. He expressed frustration with interacting with the OMG as a non-member.
Victor Sanchez followed up with more specific details to motivate the virtues of a virtual machine approach to the QVT problem. It's of course a widely used approach that has proven itself in a number of other settings. Some problems he noted were things like the lack of support for regular expressions. He sees QVT as analogous to UML as a standard notation, in the sense of being unified but not universal. One size fits all is never ideal for anything in particular, though, so he imagines domain-specific languages as a good way to specialize the power and generality of QVT.
Victor believes there would be value in having a low level OMG specification for a transformation virtual machine. It would make it easier to build new transformation languages and provide a separation of concerns that would allow for focus on optimization at the virtual machine level. It would be an enabler to drive transformation technology in general. There was some discussion about how this technology relates to existing technology being provided by ATL at Eclipse.
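To make the byte-code analogy concrete, here's a toy Python sketch of the idea: a transformation language compiles down to a handful of primitive instructions that a small engine executes over a source model. The instruction names (MATCH, CREATE, COPY, EMIT) are invented for illustration; they are not Open Canarias' actual ATC instruction set.

```python
# Toy "transformation virtual machine": executes a list of primitive
# (opcode, argument) instructions over a source model represented as
# a list of dicts, producing a target model.

def run(program, source):
    target, env = [], {}
    for op, arg in program:
        if op == "MATCH":      # select source elements of a given kind
            env["matched"] = [e for e in source if e["kind"] == arg]
        elif op == "CREATE":   # create one target element per match
            env["created"] = [{"kind": arg} for _ in env["matched"]]
        elif op == "COPY":     # copy a feature from each match to its target
            for s, t in zip(env["matched"], env["created"]):
                t[arg] = s[arg]
        elif op == "EMIT":     # append created elements to the target model
            target.extend(env["created"])
    return target

source = [{"kind": "Class", "name": "Book"},
          {"kind": "Package", "name": "lib"}]
program = [("MATCH", "Class"), ("CREATE", "Table"),
           ("COPY", "name"), ("EMIT", None)]
print(run(program, source))  # [{'kind': 'Table', 'name': 'Book'}]
```

The appeal is exactly the separation of concerns he described: front-end languages like QVT Relations (or domain-specific ones) only need to target the primitives, while optimization effort concentrates on the engine.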
Next up was Christian Damus of Zeligsoft to talk about OCL. It started out as an IBM Research project in 2003 and was part of RSA 6.0 in 2006. It was contributed to Eclipse in 2006 as part of EMFT and is currently in the MDT project. It supports both Ecore and UML as the underlying target metamodels. The specification has been a moving target, with quite significant changes from version to version. Of course changes to UML also had impacts, and it's not as well aligned with UML 2.x as it should be, with many references still to older versions of UML. Often diagrams and descriptions are inconsistent; even informative descriptions versus normative definitions are not always in agreement. There was confusion between the invalid type and the invalid value.
There also seemed to be important things missing, such as a concrete syntax for CMOF and descriptions for TypeType. There is no support for reflection, i.e., for getting at the class of a value. Often he was in a position of having to guess how to interpret the specification, and some guesses proved inconsistent with subsequent clarifications. OclVoid and OclInvalid are problematic to implement since each is a value that must conform to all types. There are issues with expressiveness, such as a poor set of string-based operations. Serialization of demand-created types is a problem. Eclipse's API rules don't always allow one to easily address changes in the specification in a binary-compatible way. A normative CMOF model would be very useful to have.
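To see why OclInvalid is so awkward, consider that in OCL, `invalid` is a single value that conforms to every type, and it propagates strictly through almost every operation; `oclIsInvalid()` is essentially the only operation allowed to look at it without itself becoming invalid. A rough Python sketch of that semantics (my illustration, not Eclipse OCL's implementation):

```python
# Sketch of OCL's "invalid" semantics: one bottom value conforming to
# all types; strict operations propagate it, while oclIsInvalid() may
# inspect it and return a normal Boolean.

INVALID = object()  # the single OclInvalid value

def strict(f):
    """Wrap an operation so any invalid argument yields invalid."""
    def wrapper(*args):
        if any(a is INVALID for a in args):
            return INVALID
        return f(*args)
    return wrapper

concat = strict(lambda a, b: a + b)   # e.g. OCL String::concat

def ocl_is_invalid(x):
    # The exception: a non-strict operation that observes invalid.
    return x is INVALID

print(concat("abc", "def"))                    # 'abcdef'
print(ocl_is_invalid(concat("abc", INVALID)))  # True
```

In a statically typed host language like Java, there is no natural bottom type to play this role, which is a large part of the implementation pain he described.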
Elisa Kendall of Sandpiper Software talked about the Ontology Definition Model (ODM). Ontology is all about semantics. There is a need to describe the underlying meaning of documents, messages, and all the other data that flows on our information web; we need to bring order to the syntactic chaos we have today. So what is an ontology? An ontology specifies a rich description of the terminology, concepts, and nomenclature relevant to a particular domain or area of interest; properties explicitly defining concepts and relations among concepts; rules distinguishing concepts, refining definitions and relations; and constraints, restrictions, and regular expressions. In a knowledge representation you have vocabulary (the basic symbols), syntax (rules for forming structures), semantics (the meaning of those structures), and rules of inference for deriving new facts. Elisa is interested in rebooting the EODM project at Eclipse. There is increasing interest in ODM in the industry as ODM itself matures.
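That last ingredient, rules of inference deriving new facts, is what separates an ontology from a mere vocabulary. A toy Python sketch, loosely in the spirit of RDFS subclass reasoning (greatly simplified, and not the EODM API):

```python
# Toy inference: given subclass assertions as (sub, super) pairs,
# derive the transitive closure, i.e., new facts not stated explicitly.

facts = {("Novel", "Book"), ("Book", "Publication")}

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:               # iterate to a fixed point
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))   # subclass is transitive
                    changed = True
    return derived

print(("Novel", "Publication") in infer(facts))  # True
```

Real ontology reasoners of course handle far richer rule sets, but the principle, stated facts plus inference rules yielding derived facts, is the same.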
Here's my personal summary of the problems that I raised in my closing remarks.
- Lack of reference implementations to validate specifications leads to lower quality specifications.
- The divide between developers and specification writers results in a communication gap that leads to loss of information.
- Reporting issues and then tracking them to their resolution is frustrating; it's not a transparent system.
- How best to involve non-members who have something to contribute?
- Alignment between release schedules of the implementations and the specifications would have value.
- Specification changes are not as consumable as they could be and sometimes result in binary incompatible API changes.
- Why does modeling have a bad reputation?
- Doesn't it take much longer to work on a specification than to do something innovative in open source? De facto standards are commonplace.
- Communities are empowered by open source. Those who make things happen, lead the way; those who talk will be spectators.
After the meeting I spent time with Dominique and Diarmuid, seen here with Jean.
Then I went to the OMG reception and finally I had dinner with Jean and company. Finally it was time for this late blog, in accordance with my same-day blog policy. Gosh but it's a tiring policy! I wonder how many notes I'll have to look at after I'm done...