Friday, March 21, 2008

EclipseCon: Thursday

Today's keynote speaker was Cory Doctorow. He pointed out that predicting the future is based on extrapolating the present forward, and that it tends to overlook surprises. He believes we're evolving toward an information economy. Controlling information has a long history, starting with notions such as copyright law. Despite expectations to the contrary, off-shoring our manufacturing doesn't guarantee that developing nations will respect other arrangements like copyright and intellectual property laws. Part of the problem is that it's just too easy to copy things, and it's getting easier and easier. The best way to deal with this is just to out-compete those who might distribute the information being copied.

He believes that, in general, success at any firm is largely determined by collaboration, i.e., getting everyone on the same page. Execution relies on everyone driving momentum in the same direction. It's getting easier in the information age to drive collaboration, with Wikipedia being a case in point. The links we create in our webpages are in effect a collaboration effort that's become bread and butter for Google.

Many endeavors have a floor on their cost that's difficult to undercut. For example, an institution that makes movies for 100 million dollars won't be geared to create a movie for 20 thousand dollars. If you can drive the cost of producing something down to commodity prices, you'll be in a position to respond quickly to changes in a dynamic economy. For example, the legal costs for enforcing copyrights and other laws for publishing are a major cost for many firms, yet if you tried to apply those to YouTube, you'd not be able to provide the service.

Strict liability laws implied that anyone participating in a chain of events that led to a copyright violation could be sued. But that just can't be applied to a blog-hosting website in a reasonable way, so on the internet it was replaced by "notice and takedown," i.e., you must report a violation with the expectation that the information will be made unavailable as a result. This protection from liability is what has enabled blog hosting and things like YouTube, but some are beginning to argue that these kinds of things are a loophole in the copyright laws.

Some statistics show that the majority of the blogs produced are read by fewer than four people, which makes me kind of sad. This blog is really long and I bet fewer than four people will read it all. Some might argue this represents a failure of this form of media, but Cory argues it demonstrates that producing the information is so cheap that even at this scale, it's still an effective overall ecosystem; my time is pretty valuable to me, so I hope blogging this way is an effective way to spend it.

The key in the information economy is to find ways to lower the costs to levels far below anything that might ever have been anticipated. Some will argue that freedom of information is at odds with property rights. But this can be taken too far. Is every idea someone's property and hence no idea can be reused without permission? Does a photo of a building violate the property rights of the architect of it?

Enforcing external policy on a local computer in an effort to prevent property violations (copying) is an example of taking this too far. It gives rise to a mechanism that can be exploited for malicious purposes. It's like having a self destruct mechanism on a spaceship that could be subverted.

We need to keep the cost of collaborating on software low to ensure that open source remains an effective force. Open source helps to demonstrate that value is derived from sharing information rather than from hoarding it. He points out that limiting encryption to 50 bits was a very silly idea. It allowed criminal forces to subvert security.

His final point was that freedom on the internet is an extremely important principle and that we all have a responsibility to help preserve the freedoms we might take for granted today. They didn't just happen. People had to fight for getting them and continue to fight to preserve them.

During questions, he talked about the patent crisis. He argued that the US patent office is effectively abdicating its responsibility by granting patents for pretty much everything. Clearly he's not a big fan of software patents. Venture capitalists like patents because they can often be traded and hence represent a form of currency. He argues that mostly it just helps to employ the lawyers. And worse yet, there are now trolls, who simply make threats of litigation to extort money from their victims. It's cheaper to just pay them than to hire a lawyer to deal with them, and thereby the victims, both small and large, fund the malicious behavior.

What should we do to prevent terrorism? He argues that all the information for how to make deadly poisons and bombs is available anyway. Security is not really sufficient. Assuming something is secure is just assuming no one is smarter than you. Something is only secure if you allow everyone to hack on it and thereby demonstrate security when no one succeeds in breaking it.

After the keynote, a day of OMG fun began. Just to be clear, OMG stands for Object Management Group, not Oh My God that's complicated. Kenn, acting as the chair, gave a little bit of background about open specifications and open source.

Eclipse has a long list of projects that implement OMG specifications. Such reference implementations can be shared to reduce costs and to ensure more rapid adoption. Part of the process for approving an OMG specification involves the existence of implementations that validate the integrity of the specification's design, so Eclipse definitely helps with that. However, there is insufficient overlap between the people implementing the specifications and those involved at the OMG with defining them. To improve the situation in general, we need to make both the specifications more consumable and the implementations more conforming.

Michael Soden of ikv came to talk about the fact that OMG specifications really do need Eclipse open source implementations.

His company is small and is located in Berlin and Yokohama. It provides model driven development solutions and services. Eclipse could really help guarantee compatibility by acting as a point of comparison against other implementations. They have implementations of many of OMG's specifications. For them, open source implementations save money and ensure higher quality because of wide reuse and testing. It helps guarantee interoperability by providing a consistent interpretation via a concrete realization that can be directly tested. Inconsistencies in the specification can be ironed out as part of realizing the implementation. Keeping up-to-date is crucial.

Some issues he listed at the end include: EReference/EAttribute versus Property; FeatureMaps are not accessible to other specifications; generics are specific to Ecore; MDT OCL doesn't support navigation to the eClass/getMetaClass; and a lack of four layers of meta (the meaning of which I never got a chance to ask about). Lots of issues with the QVT specification were highlighted, which is a relatively new specification. In the end, there was a question of how to align OMG's processes with Eclipse's processes. Joint participation, collaboration as part of finalization, joint reviews, and issue reporting to improve the feedback loop were suggested.

Next came Pete Rivett of Adaptive, a company focused on repository software, to discuss the challenges of CMOF.

He talked about the history of MOF's (Meta Object Facility) versions and its relation to XMI (XML Metadata Interchange). He feels that the JCP's JMI process was perhaps a failed exercise. It's gradually evolved, as part of MOF2, to separate out different levels of concerns, e.g., EMOF (Essential MOF) versus CMOF (which stands for Complete MOF, not crappy MOF nor complex MOF). The issue of associations was raised again, as it was at the BoF the other day. Things provided by CMOF include associations, association generalization, property subsetting, property redefinition, derived unions, and package merge. A long list of OMG specifications are CMOF based, so a lack of CMOF support is hampering those specifications. Mapping between CMOF and UML is relatively straightforward, but less so for mapping between EMOF and CMOF. A CMOF Java API would be useful, including support for generating and implementing the APIs. Support for more facilities needed by repositories is seen as important too, as is better integration with constraint support. Long term, things like semantic MOF, where objects can change class and have multiple classes, will become important. (What the heck are objects with changing classes anyway? I didn't get a chance to ask.)

Next came David Sciamma of Anyware Technologies, a company specialized in open source and focused on model driven development.

Anyware is a member of the TOPCASED consortium, which aims to provide an open source critical systems development environment with support for meta modeling. Their primary need is something pragmatic, and for that they need productive tools that let users work at a high level, i.e., tools for manipulating metamodels. They also want to support integration with other technologies. Ecore Tools provides an Ecore perspective that facilitates first class support for Ecore as the focus of interest. Navigation, searching, and analysis of references along with a type hierarchy view are important features. Better integration with the code generator is key as well, i.e., more transparent access to the GenModel.

Anyware provides an enhanced GMF editor with usability improvements for common tasks. They've also taken an interest in Emfatic, a human readable notation for Ecore, and have offered to help Miguel and Chris drive it forward. He gave a quick demo where he showed that two opposite references are rendered as a single line. I was like totally cool dude, who says Ecore doesn't have associations. Oops, did I say that out loud? Apparently I did. They've even added support for filtering so you don't have to show all references in a given diagram. All I could think was goodbye moldy old *.mdl files and hello integrated Eclipse graphical editing. Woo hoo!
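For anyone who hasn't seen Emfatic, here's a tiny sketch (from memory, so treat the exact syntax as illustrative rather than authoritative) of how a pair of opposite references is declared; the "#" marks the opposite end, which is exactly the pairing their diagram renders as a single line:

```
@namespace(uri="http://www.example.com/library", prefix="lib")
package library;

class Writer {
  attr String name;
  ref Book[*]#author books;   // opposite of Book.author
}

class Book {
  attr String title;
  ref Writer#books author;    // opposite of Writer.books
}
```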

Next came Ed Willink of Thales:

He believes that EMOF is somewhat poorly specified at the OMG. Even the namespace URI for it is obscure, and where the primitive types live seems left hanging. (It's no wonder that I can't really expect any tool to ever provide a meaningful EMOF serialization, and it makes me wonder, where the heck is EMOF.emof and why didn't they notice that its data types are AWOL?) Other glaring holes stand out, like the fact that OCL extends EMOF but OCL's Real type is not defined anywhere. Also, QVT provides non-normative Rose and Ecore models.

A specific issue he brought up is the fact that labeled non-navigable association ends are meaningless in EMOF and hence the information they specify (which is needed in QVT/OCL expressions) disappears. Ecore could accommodate this by annotating an EReference to specify the name of the non-existent opposite. (I think that's a good idea.) He believes that the lack of extents is an issue. (I recall I deleted those when working on Ecore because no one used them nor could explain a purpose that wasn't already served by Resources.) He exploits the fact that Ecore can be serialized as EMOF XMI and that it can be read from it. In this way, Ecore acts as the Java realization for EMOF. To support a direct EMOF model, adapters are used to view each Ecore instance as its corresponding EMOF instance (which is similar to how SDO is supported). His design supports a kind of dual serialization for both forms of the model. He's looking forward to exploiting EMF's new content type support to simplify resource handling.
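The adapter approach he described can be pictured with a plain-Java sketch. To be clear, the class and method names below are my own illustration of the pattern, not EMF's actual Adapter API: the point is that the EMOF view wraps the Ecore instance rather than copying it, so the two views can never disagree.

```java
public class EmofAdapterSketch {

    // A minimal stand-in for an Ecore named element.
    public static class EcoreClass {
        private final String name;
        public EcoreClass(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // The EMOF-flavored view we want clients to see.
    public interface EmofClass {
        String name();
    }

    // The adapter delegates to the underlying Ecore object,
    // so changes are visible through both views.
    public static EmofClass adapt(EcoreClass ecoreClass) {
        return ecoreClass::getName;
    }

    public static void main(String[] args) {
        EcoreClass c = new EcoreClass("Book");
        System.out.println(adapt(c).name()); // the same object, seen as EMOF
    }
}
```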

It was finally time for the first of the discussion periods. Kenn stated that there was a consensus at the modeling BoF about the need for a richer set of modeling concepts as provided in CMOF. I contradicted him since it seemed to me only a small number of people actually stated they commonly needed such concepts (but I suppose that subset of folks has a consensus among them). Then we discussed copyright issues around OMG specifications and their normative models. After all, a normative model is intellectual property, and when you pump that through the generator, the result is still the property of the model's originator. So it would be good for the OMG to make it clearer that the models can be used freely. There was discussion about who can be involved in the specification and how to raise issues when you aren't a member or when you can't vote. The OMG process just isn't very open, according to some. What tools folks can use to help work with the OMG's models is an issue too.

We talked about it at length, but I got kind of bored and tuned out, thinking, jeesh dudes, just use a darned Eclipse tool; the stone you're using now has worn out. I'd rather folks argue about which tool to use at your OMG meetings, given that the best we can do at Eclipse is lead the horse to water. I know I should have been more open minded somehow, but I was getting kind of hungry because it was lunch time.

After lunch, James Bruck of IBM discussed his experience with working on UML2 and how issues discovered by folks implementing the specification are raised with the OMG.

Some shortcomings from a Java 5.0 support perspective are things like support for Java-style generics (which was added to Ecore in 2.3). UML's template mechanism is a poor match for Java-style generics, so UML adopted a profiling mechanism to allow such things to be specified. He discussed some specification problems, for example, ambiguity with ball and socket notation. There's also often ambiguity in wording that leaves things far too open to multiple interpretations, and a lack of motivation about the intent of a design makes these difficult to resolve. Also, sometimes derived attributes aren't marked as such. These issues can all be harvested from reference implementations that are based on a normative description, and we could ensure that the normative description determines the textual descriptions in the body of the specification. Another problem is that changes between versions cause confusion, especially when they are binary incompatible changes. There really ought to be better change tracking between versions, with a rationale for why something has been changed. Better tracking of issues, perhaps similar to bugzilla, would be great.

Dave Carlson presented his work on a UML profile for XML Schema.

He believes that profiling is generally an art and there is little documentation in the way of best practices. Schemas are often the normative description of his customers' data. Round trip mapping between UML2 and XML Schema is an important goal. (Though I had to wonder how UML multiple inheritance can be round tripped in an XML schema that supports only single inheritance.) Given there are default mappings, one can apply a profile only when the mapping is different from the default, and that can help reduce noise. Modularity between schemas needs to be reflected in the UML modules. He showed a demo of his tool in action. It all looked quite powerful. (But I could have asked questions for an hour and didn't have the opportunity. It seems to me that a fundamental problem with what I saw was that if you took his UML, mapped it to Ecore, and serialized an instance of that Ecore, it wouldn't serialize anything like the schema the UML mapped to. So in a sense, it seems he's just using UML as a notation for expressing XML Schema, rather than as a model for specifying instances that conform to that schema. I know I'm missing part of the puzzle.)

Next, Raphael Faudou of Atos Origin talked about their use of UML, SysML, ADDL, and SAM.

Each was modeled with Ecore and the corresponding TOPCASED editors were mostly generated from those Ecore models. There is growing interest in SysML, but it remains to decide if it should be realized as a UML profile, a meta model inheriting from UML, or as a new autonomous meta model. It's clear the relationship between SysML and UML needs to be clarified.

Victor Sanchez of Open Canarias discussed their problems implementing QVTo.

He made it clear that he believes that a number of things are missing from the specification. There seems to be some mismatch even between model levels, as well as some type inconsistencies. He described the architecture of their solution, which involves an Atomic Transformation Code layer over top of a Virtual Transformation Engine that in turn is propped up by EMF and Java. They're collaborating with Ed Willink on QVTr and QVTc. (I really need to learn more about QVT and what all these different relational, operational, and core aspects of them mean.)

Jonathan Musset of Obeo is working on QVTr and just recently on MTL (MOF to Text Language).

Obeo has eight committers and is involved in M2M, M2T, and EMFT compare as well as in STP SCA. Whether you're generating model versus textual output makes a difference in how your solution is expressed, and hence there are specialized languages for each. MTL will be an interesting contribution to the M2T space, where neither xPand nor JET are based on specifications. (So many people bashed EMF's old and rotten JET templates at EclipseCon. It's like they were conspiring.) Issues like working with models that are in the workspace versus those that are in the target or running platform are important to resolve. (The topic of the BoF on Monday.) He points out that it's a challenge to feed back changes into the specifications. QVT and MTL need to have compatible libraries and currently don't. Inconsistency and ambiguities in the specifications are also a problem for these guys. Many examples were given.

Ed Willink was back to talk about getting Eclipse's OCL working with QVT.

He described the various interesting editors to support the textual notations for these languages as well as the support for integrating those with graphical and outline views. Integrating the various diagrams and views into a single editor is important. He reminds us that the mismatch between EMOF and Ecore hasn't helped him. He has fixed the QT.mdl to correct blatant errors. He's looked closely at Essential OCL and expressed appreciation for Christian Damus' help with resolving OCL issues.

We didn't have all that much time for discussion before I had to make concluding remarks.

There was some talk about needing some type of pathmap mechanism in EMF, which frustrated me a lot because pathmaps in UML are implemented by EMF's URI mapping support so the building blocks people need to do what they want to do already exist. (And please don't complain about documentation, just help write some after I help answer your questions!) As part of closing, I expressed my view that the OMG needs to be more focused on helping developers realize their specifications in concrete implementations. This will make their work more relevant.
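Since this came up, the heart of the URI mapping support people were asking for is just longest-prefix substitution. Here's a plain-Java sketch of the idea (the class and method names are my own illustration, not EMF's actual URIConverter API, and the mapped-to path is just an example target):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the prefix-remapping idea behind EMF's URI map:
// an abstract "pathmap" URI is redirected to a concrete location.
public class UriMapSketch {

    private final Map<String, String> uriMap = new LinkedHashMap<>();

    public void put(String prefix, String replacement) {
        uriMap.put(prefix, replacement);
    }

    // Replace the longest matching prefix; URIs with no
    // matching prefix pass through untouched.
    public String resolve(String uri) {
        String best = null;
        for (String key : uriMap.keySet()) {
            if (uri.startsWith(key) && (best == null || key.length() > best.length())) {
                best = key;
            }
        }
        return best == null ? uri : uriMap.get(best) + uri.substring(best.length());
    }

    public static void main(String[] args) {
        UriMapSketch map = new UriMapSketch();
        map.put("pathmap://UML_LIBRARIES/",
                "platform:/plugin/org.example.uml.resources/libraries/");
        System.out.println(map.resolve("pathmap://UML_LIBRARIES/PrimitiveTypes.library.uml"));
    }
}
```

Register a mapping once, and every model that references the abstract pathmap URI resolves to the concrete location without the model itself ever changing, which is the whole point.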

We developers want to be implementing well-defined specifications that will support interchange and make it easier for our work to be adopted. I was a little frustrated that we really spent most of our time talking about issues that I recorded like a secretary without sufficient time for interesting discussions. I feel like we resolved very little, though it was an interesting meeting and we do have a lot better understanding of the issues as a result of the session.

I showed up late for the closing panel and as I sat down, Mike interrupted the proceedings to announce that "Ed Merks has entered the room." Jeesh, can't a guy sneak in late! I think they were into predictions for the future of the community at the time, and when they got to Rich, he predicted that by 2010, "Ed Merks will have posted his millionth newsgroup posting" and ironically I was doing a compulsive newsgroup check at the time. I multi-task a lot; no newsgroup question went unanswered during EclipseCon.

The final closing statistic for the closing panel, as you can see below, was that I posted 54 photos (not counting this blog of course).

I hope some people have lived vicariously through the various blogs this week to get a sense of the excitement and atmosphere at EclipseCon. It's truly a must-not-miss event that gets better every year. It might sound sappy, but it really is like a big family reunion in so many ways.

I tried to get Lynn to beat up Wassim, but I think his hulking size intimidated her so she put on a sweet face.

My flight left an hour late due to the "technical problems" that so frequently plague all our travels. But it gave me time to upload all the photos into the blog, while chatting with Pascal, so I could spend my flight fixing up all the wording in this posting to turn it into a literary work of art. I'm about to land in Vancouver to visit my mom, dad, brother, and sister for the long weekend. I'm flying back to Toronto on Monday.

I hope everyone else had as much fun as I did, I hope everyone has a safe flight home, I hope I will see you all again next year, and, if you didn't come this year, I hope to see you for the first time next year, or at Eclipse Summit Europe, another great event which I now look forward to.

I want to thank all the people in the community for letting me feel like a top ambassador and all the committers for letting me represent them at the board. It is an honor and a pleasure.

Thursday, March 20, 2008

EclipseCon: Wednesday

Sam Ramji announced project Supernova, a plan to buy the Eclipse foundation in order to acquire all the EPL software. A release for Visual Studio under EPL would follow soon. Why? Because Microsoft simply loves Java. Too funny.

He outlined Microsoft's dabblings in open source since 2001. His open source lab was founded in 2004. He described Microsoft's interest in high performance dynamic languages. He also talked about Microsoft's OSI approved open source licenses and some of the technologies made available under those licenses. A focus on interoperability is a growing interest for Microsoft, e.g., the Linux interoperability lab. They've been working with Mozilla on Firefox optimization on Vista, including improved support for the embedded Windows media player. And they've been working with folks at Apache, developer-to-developer, on HTTP interoperability. MySQL, PHP, and Samba are also of interest to Microsoft.

So how will Eclipse and Microsoft work together? Higgins and Cardspace are focused on identity management. It's an important enabler for secure access in an electronic world. SWT support for Windows Presentation Foundation is something Microsoft is committing resources towards enabling. Our beloved Steve came up on stage to demo some of the problems on WPF that need some attention. Simple problems with colors, or more annoying ones like window flicker during resize. He showed funky things like rainbow menus, spinning buttons and views, and jittering buttons, followed by a quick tour of some horrible code (but since most of Steve's code is just so nice, it was hard to find any).

Mike came up to ask some hard questions. When will Microsoft join? Not yet. Does a C# IDE at Eclipse make sense? Sort of, but no planned activity. Will Microsoft improve interoperability with Java? Certainly Microsoft is interested in being a platform on which Java applications can run. Will Microsoft have committers at Eclipse? Mostly they just want to help committers rather than be committers. What is the budget of the open source lab relative to 50 billion dollars revenue? It currently has a 5 million dollar budget and that will hopefully grow. What factors have been important to get buy in on open source initiatives? Interoperability is the primary driving factor, i.e., making Microsoft's platform a platform of choice. Why did Microsoft define new licenses instead of using an existing one? The issues around intellectual property clarity, i.e., understandability to technical folks, were key concerns.

In the end, I think it was a fairly dry talk that didn't say many things that really gave me a warm fuzzy feeling. It was gracious of Sam to come speak to an audience that was likely to be skeptical.

After that, it was time for the e4 presentation, which attracted nearly the same number of people as the keynote. Nothing like anticipation of bloodsport to stir up interest. McQ reflected on why it's been hard to get the community involved. It's complex code that has grown baroque over time, so it's hard to make changes without negative impact. Looking forward, we need to understand how Eclipse will respond to the growth of rich web-based user interfaces and REST communication to keep itself current and relevant. By doing cool new things, it's hoped that the community will get more involved and in fact that involvement will drive what actually gets done. Certainly there is a desire to make architectural improvements as well as the ability to target the web. He demoed the ability to alter the look of the IDE using a CSS-style model that controls the visual aspects.

Jochen talked about some of the cool things that have been done with RAP to target the web. Effectively the server is running the actual application and it's just being viewed by the client through their web browser. Many folks perceive there to be a mismatch between a widget-based API and what's actually available in a browser. RAP is a server-based design, while folks like Steve have been experimenting with a client-based design, more like Google Web Toolkit, where the UI really is running on the client. Each type of approach has different scalability issues. How many applications can you run on one server, or how many widgets can you support running in a client's browser?

This was followed by dynamic dude Jeff. He explained that the 3.x stream will continue for many years to come, regardless of the 4.0 effort. The established community need not worry about disruption, and a successful 4.0 will feed back into the 3.x stream when possible. The different parts in e4 will move at their own pace with the expectation that they will be driven forward by various teams. The outlook for development is a two year plan with a checkpoint review next year. An e4 Summit is anticipated soon and I'm sure the e4 BoF tonight will be a good one.

After that I decided to attend some short talks, since there was quite a bit of modeling content. Kenn gave us a very quick helping of MDT acronym soup.

Steve Robenbalt presented on building an RCP application with EMF. He explained that the runtime was very useful and support for XMI serialization was key. It supported rapid prototyping. In his opinion, it's easy to get started with EMF because it has good tutorials to help get your feet wet. His RCP application comprised five models: facility, recipe, process, operations, and visualization. He was challenged by incomplete initial designs, short deadlines, frequent focus changes, and remote firewalled access. EMF's support for rapid prototyping helped a great deal. Customers actually edit the serialized XMI instances to effect changes.

Jim van Dam presented StUIML; it sounds similar to what Yves was doing. It lets you describe your user interface with a high level abstraction that can then be targeted to different environments. You then bind your data model to that UI description. It's totally cool. He mentioned how much happier he was with xPand templates than with JET templates. That should increase the oAW pressures to use xPand in EMF's generator.

Hajo Eichler talked about ikv++'s interest in QVT. He demoed the editor support for helping you write your implementations. They support cool things like a debugger to help understand the behavior of your transformation as it executes; he showed that in action. Very cool!

Then David Sciamma presented the Ecore Tools project. The goals are to support a graphical editor for Ecore with interesting views in addition to a class diagram. They also want to provide integration with things like search and with the generator model. An improved outline, a richer properties view, and filtering are among the capabilities that are included. They're interested in Emfatic support (but Miguel's work on Emfatic seems to have stalled). He gave a very nice demo. It's awesome and has improved a lot since last I had time to look at it. I'll need to find some time again. It would be good to throw away my moldy old Rose 98 stuff.

I managed to convince Jim van Dam to come and have lunch with me. Mike Gerring, who has been helpfully writing EMF recipes as a reward for my helping him, joined as well. Jim is interested in writing a project proposal for his work. I encouraged him to hook up with Yves, because their work is very similar but also complementary. I'm very encouraged. He also showed me more details about EMFTrans after lunch.

My effort to build up the modeling community one committer at a time was followed by Ian's panel, Building Open Source Communities: One Contributor at a Time. How timely is that? The panel was fun and I hope the audience got some value from it.

After this it was time for my partner in crime, Rich Gronback, to give his talk about the Eclipse Modeling Project as a DSL Toolkit. DSL stands for Domain Specific Language, for those people whose lives are sadly devoid of models. Of course no modeling project talk is complete without a healthy helping of acronym soup. Very hearty and nourishing! Rich outlined the components of the swiss army knife used to develop a DSL toolkit. Again, xPand gets a plug in terms of its superiority to JET. It's cool to know that it's possible to take the XHTML schema and convert it to an Ecore model that will allow you to support model to model transformation to generate HTML. It was also cool to see a whole cross section of technology from the modeling project in action.

After that Rich and I got a demo of some of the cool stuff the Thales guys were doing. It was very slick though quite similar to the model workflow engine work, so we need folks to chat about the potential for collaboration.

This was followed by a demo from the Skyway folks. Notice that's Eric Rizzo on the far right. I'd wondered why he was asking EMF questions on the newsgroup. Now I know. Jared explained to me the design ideas and showed me a demo of their slick new software which they recently open sourced. They're very interested in what it would take to host it at Eclipse.

They're making heavy use of EMF and are effectively emulating much of the design behind Ecore and the generator for it, only they have their own DSL rather than Ecore and are generating code to support a web application specified using that DSL. They've really tapped into the potential of model driven development. They rearchitected their whole product in just six months with seven people using EMF and made comments like, we saved 500,000 lines of code by using EMF. And now they are exploiting the same principles so that their users can build specialized web applications far more quickly. They're a snappy bunch of guys with some great ideas and high energy. I love to see successful clients.

On the way down to the reception where they had food, drinks, and chocolate fountains, all free, I ran into Michael and Joel.

At the reception I chatted with Scott Schneider and Eric.

Until this mad woman showed up and beat Eric senseless.

On the way to the e4 BoF, I ran into Simon Moser, who I've helped on the newsgroups; I hadn't met him in person before.

Then Oisin gave me one of his patented looks.

The first half of the BoF was interesting but focused to a large extent on the need for backward compatibility. I think many folks don't have a good sense of the impact that breaking changes would have on Eclipse's huge ecosystem.

After that I went to Robert Fuhrer's IMP BoF. IMP can generate a JDT-style IDE from a grammar and has a lot of potential application in the DSL space.

Finally I headed back to the bar, where the drinks were free thanks to Mike. I found Kenn at a table with the Topcased guys.

Given the drinks were free, there were many happy people, including Chris, Kevin, Lynn, Carlos, Andrew, and Jeff.

It was cool to meet Andrew Overholt in person. He lives in Toronto.

And of course Tom is one of my favorite community members.

Such a long day! Way past time for bed...

Wednesday, March 19, 2008

EclipseCon: Tuesday

The morning started a little late; I'd forgotten to actually turn on my alarm clock after carefully setting the wakeup time. Who would have thought I could shower, shave, and dress in just 10 minutes! In hindsight, it worked out okay to have gotten an extra hour of sleep; all these blogs take a long time to finish before bed. After sprinting down to the conference hall, I made it in time for Dan Lyons' Fake Steve Jobs talk.

He explained why he decided to do this blog. Reason one was boredom. He was tired of writing about IBM and EMC. Boring, he exclaimed! The second and most important reason was fear. The print media business is being disrupted by the internet. So what to do about it? He wrote his "Attack of the Blogs" article as part of his denial process. Finally he gave in and looked into getting involved in the internet half of his company. Rejection for being "boring and dumb" prompted him to try his hand at blogging. He was interested in Jonathan Schwartz's blog and figured, hey, what if I pretended to be an important company executive on a blogging binge?

Why Steve Jobs though? Primarily because he was struck by the strangeness of Apple's culture, or rather the Cult of Apple. Their main religious gathering, Macworld, was where his parodies started. He actually admires Steve as a fascinating personality with depth. The fact that Steve's also a weird, uptight narcissist makes for excellent parody fodder. After writing for a couple of weeks, he was shocked to find how much interest there was in his blog. Word of mouth on the internet spreads very quickly. Of course he was still anonymous at this point. Now how to make money from it?

The mystery of who he was helped generate interest. As a result, he was finally offered a column to write for Forbes itself. How ironic is that, given he was already working for Forbes? There's a lesson here for big corporations around the world: remember to check under your nose. Finally he was outed, which cut back some of his audience, but also grew a new one.

So the question remains: Why does it work? Part of the reason is audience participation. Anyone can easily and quickly comment on the content to provide feedback in real time. The ideas someone suggests can easily affect the blog content for the next day. Anyone can even have a dialog via the blog comments. It let him combine his technology reporting skills with the ability to comment quickly, in a humorous way, without the usual delays. He characterized his blogs as gross and silly, with occasional eruptions of seriousness. He can twist the story and get a message across in a way he can't via a serious story. I.e., he can tell the truths he can never tell otherwise. His various image-based parodies were just hilarious.

After that I attended Artem's "GMF: Exemplary MDA" talk. The sound technicians showed up late to turn on the microphone, so that was frustrating for poor Artem. He described the techniques used by GMF to build its generation frameworks. He pointed out that EMF's JET templates have grown quite complex, which makes them difficult to read. They're also quite difficult to extend, unlike Xpand, which has more flexibility via its aspect-oriented support. Support for merging generated code with an existing target is important for producing a flexible architecture that allows models to be changed and regenerated.
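To illustrate the general idea of template-based model-to-text generation that both tools embody, here's a minimal sketch using Python's standard string.Template. This is neither JET nor Xpand syntax, and the Library model is made up:

```python
from string import Template

# A tiny model-to-text generator: a template plus a model yields code.
# JET and Xpand do the same thing with far richer template languages.
class_template = Template(
    "public class $name {\n"
    "    // generated accessor\n"
    "    public String get$name() { return \"$name\"; }\n"
    "}\n"
)

def generate(model):
    """Expand the template against a simple dict-based model."""
    return class_template.substitute(name=model["name"])

print(generate({"name": "Library"}))
```

The interesting differences Artem highlighted lie beyond this sketch: Xpand's aspect-oriented support lets you override parts of someone else's templates without copying them, which is where JET gets painful.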

After that I had a chat with Yves of Soyatec about his technology for mapping Ecore models onto a forms-based UI model. It's cool and he'll likely write an EMFT project proposal for it soon. Then I had lunch at the Modeling table with a very friendly crowd. And after lunch I went to Kenn's talk Open Source Meets Open Specifications: Eclipse and the OMG.

I was overheard explaining that no, OMG doesn't stand for Oh My God that's complicated, and then my words were taken out of context and thrown back at me. Kenn outlined the kinds of modeling technology available at Eclipse and related them to specifications from the OMG. Kenn's a good speaker. Here he is with Xavier and Yves of Soyatec.

After that I went to Eike Stepper's talk Connected Data Objects - The EMF Model Repository. He explained some of the basic aspects of EMF's resource framework, the underlying mechanism that CDO extends, and talked about some of the limitations of XML, e.g., most implementations will load the entire XML tree and then keep it all in memory. There's also no direct transactional support, nor fine-grained locking. In contrast, a database as a backing store has interesting advantages. CDO provides a distributed shared model where the server logically maintains a huge object graph and each client can work with a direct view of a subset of that model. Then he showed in a demo how to tailor a generator model to generate CDO compatible implementation classes and how to configure the server communication and repository. His demo went on to show the technology in action. There were lots of questions, which I think reflected a great deal of interest. Overall, it's very cool and he's a great speaker.
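Eike's point about whole-tree loading versus something more incremental can be seen even in a standard library, away from EMF entirely. A rough illustration in Python (CDO itself is Java and does far more, of course):

```python
import io
import xml.etree.ElementTree as ET

xml = "<library><book>A</book><book>B</book></library>"

# Whole-tree parsing: the entire document is materialized in memory at
# once, which is the limitation Eike contrasted with a backing store.
tree = ET.fromstring(xml)
titles_dom = [b.text for b in tree.findall("book")]

# Streaming parsing: elements arrive incrementally and can be discarded
# once processed, keeping memory bounded for large documents.
titles_stream = []
for event, elem in ET.iterparse(io.StringIO(xml)):
    if elem.tag == "book":
        titles_stream.append(elem.text)
        elem.clear()  # free the element once we've read it

print(titles_dom, titles_stream)  # both ['A', 'B']
```

CDO goes a step further than streaming: the client only ever faults in the subset of the object graph it actually touches, with the server holding the rest.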

After that I had a nice chat with Sven and Wolfgang about oAW and about what Itemis is up to.

Time flies when you're having fun. Soon it was time for Chris and me to do our unrehearsed talk. It was a small audience, but I'm told it went well. Even Eugene and Wassim seemed to enjoy it, and they're tough customers to please! Eugene didn't even complain about all the colorful pictures in the slides. There were lots of questions from the audience and some good two-way discussions.

What I thought was incredibly nice was that after the talk, Bruce from LeapFrog and Tamar from Cisco but formerly from LeapFrog came up and gave me a gift of chocolate along with a card to express their gratitude for all the help I provided them on the EMF newsgroup.

Of course the gift was very cool, but as I reread the comments in the card now, I'm so gratified to know that timely help does indeed have such a positive impact on EMF's users.

After that it was time for the reception. Free food and drinks! Donald was shocked to get an invite to a NetBeans party.

I kid you not. There were apparently some NetBeans fans with sad empty little lives who needed to live vicariously off the Eclipse crowd.

Not many people came to the "meet the board members" table, i.e., no one came, but it was a good place to hang out. There was a little PDE mini gathering of Brian and Chris.

It was also a staging area for modeling people anticipating the Mega Modeling Mania BoF. Here are Jean, Paul, Pierre, Bernd, and Ed (who, if you look closely, is wearing a "no whining" ribbon).

I had a little chat with Nicolaus, who had attended the diversity talk Chris and I gave. He's hoping to get involved by contributing patches to fix bugs and wanted some advice.

The Eclipse community has so many incredibly nice people. It's not only kind of heartwarming but also highly motivating for me to keep doing more of the same. I like playing the role of ambassador; though I'm not always so politically correct, as soon became apparent at the Mega Modeling Mania BoF. There were more than sixty people, so it took three photos to capture them all.

We had a lively discussion as I managed to stir simple differences of opinion into open hostility. Just kidding! It was indeed a lively discussion though. Such a diverse group. Some folks are definitely focused on the most abstract concepts that underlie modeling while others care most about something that's grounded firmly in a pragmatic Java-based implementation. And even that characterization stirred up discussion about dynamic EMF models not really being Java-based because they aren't generated in Java; I tried to explain that of course they're still Java-based, using things like Java strings, integers, and so on, but sometimes it's best to leave well enough alone.
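To make that point concrete, here's a tiny sketch of the "dynamic models are still host-language-based" argument. It's Python rather than EMF, so everything is illustrative, but the shape is the same: the class is defined entirely at runtime with no generated code, yet the values it holds are still the host language's plain strings and integers:

```python
# A "metamodel" element created dynamically at runtime, not hand-written
# or generated: the analogue of a dynamic EClass with no generated class.
Book = type("Book", (), {})

b = Book()
b.title = "Ecore Explained"  # still an ordinary host-language string
b.pages = 42                 # still an ordinary host-language integer

assert isinstance(b.title, str) and isinstance(b.pages, int)
print(type(b).__name__)  # -> Book
```

In EMF the same holds: a dynamic EObject's attribute values are still java.lang.String, java.lang.Integer, and friends, generated code or not.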

The issue of associations reared its head, a precursor of Thursday's OMG discussions, I'm sure. I tried to express my perspective that typically associations are just pointers in an object oriented language, and that if they are more than this, they become objects in their own right, with pointers to the objects being associated. But again, I find not everyone likes to think in the same concrete terms as I do. Oh well, I swore I wouldn't leave the room with any work items like yesterday's BoF, and was successful in that. On the way to the next BoF, I got the person who was asking me to add new things to talk to the person asking me to remove existing things, hoping they would cancel each other out...
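For what it's worth, here's a little sketch of the distinction, since I do like concrete terms. The Person/Company/Employment classes are hypothetical, not anything from the OMG specs:

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str

@dataclass
class Person:
    name: str
    employer: "Company | None" = None  # association as a plain pointer

@dataclass
class Employment:
    # The association reified as an object in its own right: it points
    # at the things being associated and can carry data of its own.
    person: Person
    company: Company
    since: int

acme = Company("Acme")
pat = Person("Pat", employer=acme)       # simple pointer
job = Employment(pat, acme, since=2008)  # first-class association object

print(pat.employer.name, job.since)  # -> Acme 2008
```

The moment you need the "since" field, the pointer view stops being enough, and you've quietly invented the second form; that's really all I was trying to say.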

The UML Tools BoF had quite a nice turnout as well.

The folks from Anyware want to get involved and contribute some of the cool stuff from Topcased and Papyrus.

We need to talk some more tomorrow about how best to enable this new group of committers without impacting the UML Tools project's release schedule.

I was pretty tired after this, but decided to head down to the bar with a few remaining modelers. When I got there, I spotted Andy and Nicolaus in the bar and Andy gave me a quick Glimmer demo. It's cool!

Mariot and Jonathan were nice enough to buy a round of drinks for a bunch of us.

We were all pretty tired, so soon we were off to bed, or in my case to write the second half of this very long blog. I sure hope someone is reading it! It's a lot of work, but I'm sure it will help me preserve my fond memories of the week's events.