Sunday, June 29, 2008

After a whirlwind week of upheaval, I've arrived in geek heaven. I blogged about the Eclipse OMG Symposium, but I didn't have time to blog about the next day, which I also spent in Ottawa. On Thursday, I visited my colleagues at the OTI lab. John Duimovich is my mentor, so we had a chance for a good chat. It seems someone held him accountable for my impending departure; they've demoted him to flipping burgers and weenies. He mumbled something about less responsibility and more time for fishing.
It was great to spend time with so many of my platform buddies. I expect I'll be working with them a lot because of e4. I'd booked a very late flight home that evening so that I could attend the Ganymede DemoCamp in Ottawa.
There were some interesting demos. When Lynn asked if I had anything to demo, I didn't realize I was committing to giving an "official" demo. So I had an extra glass of wine and showed the graphical Ecore Tools editor in action. As you know, I love that thing!
Ian gave out some prizes at the end. It was a fun event and I'm glad I stayed for it.
I got home late Thursday and the next morning I began setting up my brand new T61p, which I've named toad. It's got a 1920x1200 display, a 2.5 GHz Core 2 Duo, 3 GB of RAM, and a 200 GB drive. It's a heck of a lot better than what I had before! So it was time to ditch all the old hardware and set up all the new. Check out the great setup I have now.
Of course KC uses my chair any time I'm not using it. There's a dog basket on the floor should the girls tire of being in my lap. I also have a nice footrest. There's no longer a desktop machine, which was actually more of a floor-top. Notice how many frogs are in the room? If you could see the frog calendar up close, you'd see July 4th and July 18th marked with "woo hoo": the former is my last work day (US Independence Day, coincidentally enough) and the latter is the end of my two weeks of paid vacation time. Let me show you a closeup of the work area where all the magic happens.
Some things to note. I have a docking station that hooks my ThinkPad directly up to the network, printer, keyboard, monitor, and power backup, though I do have wireless as well. I can't live without my little red UltraNav mouse controller, so now I have two. I've hooked up the external monitor as a second display, so I'll be able to keep things like Thunderbird and my chat windows in full view all the time; Thunderbird is hooked up to the newsgroups and to my Gmail account, Ed.Merks, via IMAP. Of course I have a pretty picture of the garden as a background. I have Pidgin installed, which lets me chat with folks on IRC, Google, MSN, and Yahoo with just that one application. And I have Skype, which for $30 lets me talk to anyone in North America for a whole year; see the nice Logitech headset I use for that.
I installed Firefox 3.0 and went extension crazy. There are a few really cool ones, like Interclue, which shows a little icon when you hover over a link; hover over that icon and it shows you a preview of the page at that link. The picture above shows this in action. I can also faviconize my tabs, like the one for Planet Eclipse, so I have room for lots of tabs. The ability to undo closing a tab is handy too.
Here's a close up of the little shrine I have for my treasured Eclipse Community Awards. Without community-inspired confidence, I would not have taken the bold steps I did this week.
There's a beautiful betta in the jar beside my monitor.
He and the blue damsel in the saltwater tank right next to the jar like to square off through the glass.
While the frogs don't do a great job of eliminating the flies that occasionally sneak into the house, if KC doesn't get them (and he's quite good at that), I can always drop them into my pitcher plant.
When I looked out my window on Friday, the fox came scurrying by. I couldn't grab my camera quickly enough to get a really good shot.
I feel like I've reached nirvana!
Well, it's time to do some more planting.
Wednesday, June 25, 2008
Eclipse OMG Symposium
The symposium started with an introduction by Kenn Hussey, who talked about the nature of open specifications, open source, and how the two relate. Interchange is a key aspect. Reference implementations speed the delivery of solutions to market through the cost savings of a shared collaborative effort. Value is derived by specializing the common basis for integration. He talked about the large number of OMG specifications implemented at Eclipse and described the principles that drive Eclipse, a focus on extensible frameworks being one of them.
He pointed out that it seems problematic that Eclipse projects are not considered reference implementations of OMG specifications; clearly a specification cannot be proven sound without a reference implementation. Past experience with the implementation of UML2 has demonstrated that the feedback from problem determination in the reference implementations to their resolution in the specification has value. Ecore's influence on MOF leading to EMOF is another example.
He talked about the various processes driving specifications at the OMG and those driving project development at Eclipse. They're quite similar. Eclipse has very regular release cycles (e.g., today, woo hoo!), so synchronizing the OMG's specifications with a more regular delivery pattern would have value. Tying specifications to reference implementations would improve the likelihood of success for both the project and the specification. There is often a gap between the high-level intent and its actual realization in code. The people and organizations working on the specifications are often an entirely different group from those working on the project. Surely that can't be ideal. A certified reference implementation would eliminate issues such as ambiguities in how a specification is interpreted. The audience asked questions about how problem reporting at the OMG could be tracked more easily, in a way similar to how Bugzilla is used at Eclipse.
After an overview of the rest of the day's agenda, Pete Rivett, CTO of Adaptive, presented some background about the challenge of CMOF (Complete Meta Object Facility). He talked about the history of MOF, how it gradually evolved away from its CORBA roots, and the need back then for IDL interfaces for everything. At the MOF 1.4 level, its mapping to Java was codified in the JCP as JMI. MOF 2.0 was developed in tandem with UML 2.0 and included a better separation of concerns. The separation into EMOF and CMOF was driven to a large extent by the influence of EMF's Ecore, and hence by model-driven Java development; CMOF was driven more by the needs of metamodel developers. He described the overall MOF 2 structure and its dependencies. He also discussed the Java Interface for MOF (JIM), an effort that has basically stalled.
CMOF includes things like full-fledged associations, association generalization, property subsetting and redefinition, derived unions, and package merge. He motivated the use cases for these capabilities. Then he talked about what Kenn has done to support CMOF on top of EMF's more basic EMOF support. He'd like to see more complete support for CMOF, e.g., a UML profile and a MOF DSL, i.e., a GMF tool, as well as better package merge tools for metamodel selection and static flattening. Conversion between UML2 and CMOF XMI is mostly done, but a standard conversion or mapping between EMOF and CMOF would be very useful. He'd also like to see a CMOF Java API that's compatible with EMOF/EMF and, of course, a specified mapping of EMOF onto Java.
He too would like to see better coordination between the OMG and Eclipse. We need to think about what to do about MOF 2.1 and what the best direction for it is. Constraints are important when defining models, so improved support for expressing and validating them is key. Diagram definition for visual rendering of models would be useful too. He mentioned some interesting future work, such as Semantic MOF, which supports things like an object changing its class or having multiple classes at once. It sounds cool, but makes me cringe. A few questions came up related to Semantic MOF, and those started to hurt my brain.
After the break, James Bruck gave a presentation sharing his experiences with implementing an OMG specification, namely UML2. He described the split between "those who make things happen," i.e., the developers, and the folks working on the standards. Issue resolution in the specification is very important when driving an implementation. A specific recent issue that's come up is how to represent Java-style generics in UML. Certainly profiles could be used, but it would be better expressed directly in the metamodel. He gave many examples where the language of the specification is inconsistent with the normative model underlying it. These things come to light when folks implement the specification and compare the implemented behavior with the descriptive text guiding their implementation decisions. It's also very hard to track changes to the specification when updating the implementation to conform to the latest version.
He suggests the specification should include architectural intent to motivate the reasons for the design. A proper summary of changes as the specification evolves is very important for understanding the impact on the implementation. Harvesting the information surfaced by reference implementations is key. A proper tracking mechanism for issues, like Bugzilla, would be very useful; in fact, why not just use Bugzilla? There's certainly some room for improvement. Communicating the developers' ideas back to the OMG is sometimes like a game of telephone, with information lost and information injected along the way. There was some discussion about the organizational structure at the OMG that makes individual involvement difficult.
Victor Roldan of Open Canarias talked about their approach to implementing QVT. They had problems with the specification, including inconsistencies, ambiguities, insufficiencies, and the lack of a reference implementation as a basis. Why is there a V in Query/View/Transformation when views don't really factor into the actual specification, he asks. The split between Core and Relations is imperfect; no one seems interested in Core, and instead Relations is tackled directly. There is a long list of things that obviously didn't come to light until someone tried to implement what was described in English. And it's just not expressive enough beyond toy examples.
Their solution is analogous to Java's translation to bytecode: they have Atomic Transformation Code (ATC) to represent low-level primitives and a Virtual Transformation Engine to execute those primitives. He gave a quick demo of the tool in action. He also expressed frustration with interacting with the OMG as a non-member.
Victor Sanchez followed up with more specific details motivating the virtues of a virtual machine approach to the QVT problem. It's a widely used approach that has proven itself in a number of other settings. Some problems he noted were things like the lack of support for regular expressions. He sees QVT as analogous to UML as a standard notation, in the sense of being unified but not universal. One size fits all is never ideal for anything in particular, though, so he imagines domain-specific languages as a good way to specialize the power and generality of QVT.
Victor believes there would be value in having a low level OMG specification for a transformation virtual machine. It would make it easier to build new transformation languages and provide a separation of concerns that would allow for focus on optimization at the virtual machine level. It would be an enabler to drive transformation technology in general. There was some discussion about how this technology relates to existing technology being provided by ATL at Eclipse.
Next up was Christian Damus of Zeligsoft to talk about OCL. It started out as an IBM Research project in 2003 and was part of RSA 6.0 in 2006. It was contributed to Eclipse in 2006 as part of EMFT and is currently in the MDT project. It supports both Ecore and UML as the underlying target metamodels. The specification has been a moving target, with quite significant changes from version to version. Of course changes to UML also had impacts, and it's not as well aligned with UML 2.x as it should be, with many references still to older versions of UML. Often diagrams and descriptions are inconsistent. Even informative descriptions versus normative definitions are not always in agreement. There was confusion between the invalid type and the invalid value.
There also seemed to be important things missing, such as a concrete syntax for CMOF and descriptions for TypeType. There is no support for reflection, i.e., for getting at the class of a value. Often he was in a position of having to guess how to interpret the specification, and some guesses proved inconsistent with subsequent clarifications. OclVoid and OclInvalid are problematic to implement since they are each values that must conform to all types. There are issues with expressiveness, such as a poor set of string-based operations. Serialization of demand-created types is a problem. Eclipse's API rules don't always allow one to easily address changes in the specification in a binary compatible way. A normative CMOF model would be very useful to have.
Elisa Kendall of Sandpiper Software talked about the Ontology Definition Metamodel (ODM). Ontology is all about semantics. There is a need to describe the underlying meaning of documents, messages, and all the other data that flows on our information web, and to bring order to the syntactic chaos we have today. So what is an ontology? An ontology specifies a rich description of the terminology, concepts, and nomenclature relevant to a particular domain or area of interest: properties explicitly defining concepts and relations among concepts; rules distinguishing concepts and refining definitions and relations; and constraints, restrictions, and regular expressions. In a knowledge representation you have vocabulary (the basic symbols), syntax (rules for forming structures), semantics (the meaning of those structures), and rules of inference for deriving new facts. Elisa is interested in rebooting the EODM project at Eclipse; there is increasing interest in ODM in the industry as ODM itself matures.
Here's my personal summary of the problems that I raised in my closing remarks.
- Lack of reference implementations to validate specifications leads to lower quality specifications.
- The divide between developers and specification writers results in a communication gap that leads to loss of information.
- Reporting problems and then tracking them to their resolution is frustrating; it's not a transparent system.
- How best to involve non-members who have something to contribute?
- Alignment between release schedules of the implementations and the specifications would have value.
- Specification changes are not as consumable as they could be and sometimes result in binary incompatible API changes.
- Why does modeling have a bad reputation?
- Doesn't it take much longer to work on a specification than to do something innovative in open source? De facto standards are commonplace.
- Communities are empowered by open source. Those who make things happen lead the way; those who talk will be spectators.
After the meeting I spent time with Dominique and Diarmuid, seen here with Jean.
Then I went to the OMG reception, and afterwards had dinner with Jean and company. Finally it was time for this late blog, in accordance with my same-day blog policy. Gosh, but it's a tiring policy! I wonder how many notes I'll have to look at after I'm done...
Monday, June 23, 2008
So Long and Thanks for All the Open Source
After almost sixteen years at IBM I've made the most difficult decision since leaving Vancouver to work for IBM in Toronto: I've resigned from IBM. Where will I be going? I'm not sure yet, but don't expect to see any less of me!
Being independent, like Jeff and Chris, certainly has a great deal of appeal. I'm also very excited by the things I see Itemis, Obeo, TOPCASED, and the rest of the diverse community doing in the modeling space. Of course to say I'm excited by what's happening is an understatement since that's the primary motivation for my decision: to focus on exciting and innovative new things of my own choosing. The Eclipse community is both inspiring and empowering. I'm grateful to be part of it. Expect that to continue and then some as I immerse myself fully.
So long IBM, and thanks for all the open source!
Saturday, June 21, 2008
Was Gany Good to You?
I'd like to think I was good for Gany, but definitely Gany was very good to me! The recent SD Times 100 article recognizing the best and brightest in our industry lists EMF as a "top-shelf" modeling tool. It's a very proud accomplishment for our open source community to be listed alongside commercial offerings! My shrinking development team has never had the capacity to build truly excellent tools---we've needed to keep our focus on making the framework powerful---so I'm absolutely thrilled that David Sciamma, Lucas Bigeardel, Gilles Cannenterre, and Jacques Lescot have helped move EMF to the top shelf. I've been playing with the Ganymede version of Ecore Tools and I'm so happy I must blog about it on this sunny Saturday morning; there are few things finer than piddling on the flowers!
One of the great things about Ecore is that it's small and simple yet extremely expressive and powerful.
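To give a flavor of that simplicity, here's a minimal sketch using the reflective EMF API; the library/Book/title names are invented for illustration. It defines a tiny metamodel and instantiates it without generating any code:

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class EcoreSketch {
  public static void main(String[] args) {
    EcoreFactory factory = EcoreFactory.eINSTANCE;

    // A package containing one class with one string-valued attribute.
    EPackage pkg = factory.createEPackage();
    pkg.setName("library");
    pkg.setNsURI("http://www.example.org/library"); // hypothetical namespace
    pkg.setNsPrefix("lib");

    EClass book = factory.createEClass();
    book.setName("Book");
    pkg.getEClassifiers().add(book);

    EAttribute title = factory.createEAttribute();
    title.setName("title");
    title.setEType(EcorePackage.Literals.ESTRING);
    book.getEStructuralFeatures().add(title);

    // Instantiate and populate the model reflectively; no generated code needed.
    EObject aBook = EcoreUtil.create(book);
    aBook.eSet(title, "Eclipse Modeling Framework");
    System.out.println(aBook.eGet(title)); // prints: Eclipse Modeling Framework
  }
}
```

That same handful of concepts (classes, attributes, references, and data types) is all the diagrams below depict.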
Even when you render all the basic attributes, references, and operations in a single diagram, it still fits on a page. This diagram is one of my absolute favorites. It's practically all you need to understand the entire Ecore API. I know it's a cliché to say a picture is worth a thousand words, but when it comes to modeling, the picture is such an important aspect of the big picture.
Creating a beautiful diagram is even more of an art than creating beautiful code. Notice how the above diagram has only a single line crossing?
In EMF 2.3 we added generics to Ecore. Just as for Java's generics, you can always consider just the erased view of a model. The preceding diagram is that erased view. The green classes and their references represent what was added. The erasure involves deleting all ETypeParameters and replacing each reference to EGenericType with its eRawType.
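For the reflectively inclined, here's a minimal sketch of how that erasure surfaces in the EMF 2.3 API; Holder, T, and contents are invented names, and I give the type parameter an explicit bound so the erased type is easy to see:

```java
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EGenericType;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.ETypeParameter;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class ErasureSketch {
  public static void main(String[] args) {
    EcoreFactory factory = EcoreFactory.eINSTANCE;

    // A generic class: Holder<T extends ENamedElement>
    EClass holder = factory.createEClass();
    holder.setName("Holder");
    ETypeParameter t = factory.createETypeParameter();
    t.setName("T");
    EGenericType bound = factory.createEGenericType();
    bound.setEClassifier(EcorePackage.Literals.ENAMED_ELEMENT);
    t.getEBounds().add(bound);
    holder.getETypeParameters().add(t);

    // A reference typed by the type parameter: contents : T
    EReference contents = factory.createEReference();
    contents.setName("contents");
    EGenericType genericType = factory.createEGenericType();
    genericType.setETypeParameter(t);
    contents.setEGenericType(genericType);
    holder.getEStructuralFeatures().add(contents);

    // The erased view: T's raw type is its bound, and the derived eType
    // of the reference is exactly that raw type.
    System.out.println(contents.getEGenericType().getERawType().getName()); // ENamedElement
    System.out.println(contents.getEType().getName());                      // ENamedElement
  }
}
```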
EObject has quite a few methods, so we show it, and EAnnotation's use of it, in a separate diagram.
And finally we have all the data types needed to define the Ecore API.
Clearly Ecore Tools is more than just adequate for my diagramming needs. Finally I can throw away my moldy old Rose 98! Unlike this peony rose, Rose 98 has definitely lost its luster.
So thanks Ecore Tools team, you're one of the brightest flowers in my garden! You've definitely been good to Gany and to me.
Well, it's time to tend my garden. Most of my 10,000+ seedlings still need to be planted, and it's a lovely day today.
Tuesday, June 17, 2008
Rules: Making, Breaking, and Following Them
Does it ever strike you that Eclipse simply has too many rules? Recall the dialog between the Emperor of Vienna and Mozart, when the Emperor states after a performance that “There are too many notes,” to which Mozart responds, “Too many notes? There are just as many notes as are required, neither more nor less.” The Emperor persists: “There are simply too many notes. Just cut a few and it’ll be perfect!” This leads Mozart to ask, “Which few did you have in mind, majesty?”
As I prepare for a trip to Denver, where the august board of directors may well be tempted to make yet more rules, we really do need to pause for a moment and consider whether Eclipse's symphony of rules contains any sour notes. Do we really have only as many rules as required, neither more nor less? I rather doubt it. Watching the progress of bug 237412 makes me wonder about the actual value of the rule about incubation, and where exactly the dividing line is between the rule as it was written and the policy around its interpretation. It even makes me wonder about the actual intent of the rule in the first place. It seems perhaps like a game of telephone, where a little is lost and a little is added at each step from the conception of the rule to its final implementation.
I've not been on the board long enough to know the origin of all the rules, the incubation rule among them. It probably sounded like a really good idea at the time. Similarly, when the board decided that all projects should have a navigation link to a consistent collection of project metadata, it must have seemed like a smashing idea; consistency is like motherhood. Unfortunately, getting the Eclipse community to follow rules consistently is like herding ill-tempered cats. Our beloved Bjorn Freeman-Knuckles-Benson typically ends up with the unenviable task of enforcing those rules. But what can he actually do to encourage, and ultimately force, all projects to conform to this or any other board edict? Generally people will just ignore what they're being asked to do, and apparently the board members themselves don't actively participate in encouraging their own employees to follow their own edicts. If you don't act, he may just have to replace your home page. Won't that make him popular?!
I'm going to cast a more critical eye on the making of new rules. I'll question whether a rule is just a good idea, or something truly required and hence something to be enforced. I'll question how it will be enforced and what the penalties of non-compliance are. I'll question how the affected parties will be informed about the new rules. I'll question the cost of complying with the new rules. And I'll question the cost of writing them, maintaining them, learning them, and enforcing them. I might even question a few of the rules we already have. I wonder if this will make me the most popular board member?
Friday, June 13, 2008
The End is Nigh!
Did you ever feel like you could almost hear the clock ticking? Although you know the end is nigh, time just seems to slow right down, unless of course your build craps out, and then it starts to fly again---just ask Oisin. But I digress, as I am wont to do. I know for a fact that the end is nigh, so pay attention!
Good, now that I've got your attention---I was starting to feel like Cassandra for a moment there---let me clarify: it's not as if it's the end of the world or anything (although something bit me in the garden and, like Nick's cut, it's getting puffy). We all know that all good things must come to an end, though it's not so clear whether the "good" qualifier is intentional, with the implication that bad things might go on forever. I sure hope not! In any case, Ganymede is definitely a very good thing, and it's definitely coming quickly to an end. Why, it seems like just yesterday when we started. Not only does time fly, but thyme flowers too.
Sorry if I startled you for a moment there. I know how closely people hang on my every word, so predictions of doom might well spread real panic. Of course all you avid readers realize that if I could foresee truly unexpected things, I'd be investing in the stock market, not writing silly blogs, wouldn't I?
Tuesday, June 10, 2008
How Confident Are You?
Many of us have an obsession in life that drives us with a passion. For me that passion is modeling. For Ruby, that passion is keeping the yard free of undesirables. Ducks are not allowed on our pond, because they're too messy. Ruby takes it upon herself to enforce that rule.
My father has been working on measurement technology in the mining industry for as long as I can remember, so that's his passion. During my recent trip to Vancouver, I helped him set up his own blog as a way to express his views about the state of affairs in that field. As you can tell if you have a read, he's been rather frustrated with the way self-proclaimed geostatisticians continue to dominate the field with questionable practices. They're his messy ducks that just don't belong on the pond. It's not so hard to chase ducks off a small pond...
But it's very annoying when they simply go and land in a larger pond where they're harder to reach!
My father has been chasing his ducks for many years, and they're in a really big pond. The gist of the issue is very simple. Consider that when you're given opinion poll data, you're typically told it's within plus or minus x%, 19 times out of 20. I'm not sure how many of you have taken university statistics, but if you have, you'll recognize that as the 95% confidence interval: if you repeated the poll many times, you'd expect 95% of those polls to produce a result that falls within that interval. It answers the question: how confident are you about your measurement? Isn't it nice that we can quantify the confidence of an opinion poll?
So now ask yourself this. If I'm going to go out and determine how much gold is in some patch of the earth that I'd like to profitably mine, wouldn't a quantifiable measurement of confidence be a good thing? For example, suppose we'd need to mine 1 tonne of gold to turn a healthy profit, and the measurements all come back indicating an estimated 1.2 tonnes of gold. Building the mine would seem like a fine idea. But if we're also told that it's within plus or minus 0.6 tonnes, 19 times out of 20, we'd suddenly realize there's a significant risk we might go broke: the interval runs from 0.6 to 1.8 tonnes, and its low end is well below the 1 tonne we need. And of course in the mining industry, do mines ever produce more gold than was originally estimated? That's like a software project that delivers early! So why is it that we can quantify the confidence of the numbers produced by an opinion poll, but we can't do the same for something with such an incredible economic impact?
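To make the arithmetic concrete, here's a small sketch of both calculations. The 1.96 multiplier is the standard normal factor behind "19 times out of 20," and the poll numbers are invented for illustration:

```java
public class ConfidenceSketch {
  public static void main(String[] args) {
    // Opinion poll: an estimated proportion p from a sample of size n.
    double p = 0.40; // 40% support
    int n = 1000;    // respondents
    // 95% margin of error: 1.96 standard errors of a proportion.
    double margin = 1.96 * Math.sqrt(p * (1 - p) / n);
    System.out.printf("Poll: %.0f%% +/- %.1f%%, 19 times out of 20%n",
        100 * p, 100 * margin);

    // The mining example from above: a 1.2 tonne estimate, +/- 0.6 tonnes.
    double estimate = 1.2, halfWidth = 0.6, breakEven = 1.0;
    System.out.printf(
        "Gold: between %.1f and %.1f tonnes; break-even is %.1f tonnes%n",
        estimate - halfWidth, estimate + halfWidth, breakEven);
    // The low end of the interval (0.6 tonnes) is well below break-even,
    // so the apparent 20% cushion hides a very real risk of losing money.
  }
}
```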
Most folks probably won't remember the Bre-X scandal, but I remember it well. I remember my father phoning me one weekend telling me that he'd been given some mining data to analyze, and when he assessed the numbers, he was absolutely sure that those numbers could only be produced by someone salting the samples, i.e., sprinkling gold into them to inflate the results. As it turned out, this was the start of the Bre-X scandal. He figured he'd finally chased those darned ducks off the pond!
No longer could they just continue to play games with numbers to produce overly optimistic results without the proper checks and balances to ensure the numbers actually correspond to physical reality in the ground. It's extremely important to test for spatial dependencies, i.e., to ensure that the variations of the measurements over the physical space in which the measurements occur are correlated in a statistically significant way. Taking those steps would be like a pond without ducks and would be so truly satisfying.
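As a toy illustration of that idea (this is not my father's actual methodology, and the grades are invented), one can at least check whether samples taken next to each other along a drill line are correlated at all:

```java
public class SpatialDependenceSketch {
  // Lag-1 serial correlation: correlate each sample with its neighbor.
  // Real geostatistical checks are more involved, but the idea is the same:
  // nearby samples should covary if interpolating between them is to mean anything.
  static double lag1Correlation(double[] samples) {
    int n = samples.length - 1;
    double meanA = 0, meanB = 0;
    for (int i = 0; i < n; i++) {
      meanA += samples[i];
      meanB += samples[i + 1];
    }
    meanA /= n;
    meanB /= n;
    double cov = 0, varA = 0, varB = 0;
    for (int i = 0; i < n; i++) {
      double a = samples[i] - meanA;
      double b = samples[i + 1] - meanB;
      cov += a * b;
      varA += a * a;
      varB += b * b;
    }
    return cov / Math.sqrt(varA * varB);
  }

  public static void main(String[] args) {
    // Gold grades (g/t) sampled at equal intervals along a drill core.
    double[] grades = {1.1, 1.3, 1.2, 1.5, 1.4, 1.6, 1.8, 1.7, 2.0, 1.9};
    System.out.printf("lag-1 correlation: %.2f%n", lag1Correlation(grades));
    // A correlation near zero would mean there's no statistical justification
    // for interpolating grades between the sample points.
  }
}
```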
But my father's ducks are really ornery compared to Ruby's ducks, and because they're in such a big pond, his ducks head for deeper water and form little groups that try to ignore him. You can imagine that it's quite convenient to be able to play games with your numbers as a way to manipulate investors. This might all not seem terribly relevant to you personally, but ask yourself, where are your retirement savings invested? How confident are you?