Partial Classes and Inheritance

Since Visual Studio 2005 introduced partial classes, the feature has been available to developers across the .NET community.  This post provides a quick description of combining partial classes with inheritance, along with an example where I found the technique particularly useful.

According to Microsoft, partial classes are most useful in these scenarios:

  • When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.
  • When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when creating Windows Forms, Web Service wrapper code, and so on. You can create code that uses these classes without having to edit the file created by Visual Studio.

So, excluding bullet point #1 because I don’t agree with it (any source control system, even VSS, should facilitate this), we’re pretty much left with code generation.  The main reason partial classes were created is to allow part of a class to be regenerated via CodeGen without blowing away the code you have added since generation.

I recently found this helpful in working with an industry-standard XML definition called WITSML.  We wanted C# data objects defined by the XML schema provided by the standard, but we also wanted those objects decorated with certain features that allowed them to fit into our overall data integration architecture.  To accomplish this, we simply created a matching partial class definition where we inherited from our base class and added any additional members we wanted on the object.  I would provide the actual code, but this walks the line on some intellectual property issues since the concept belongs to my employer, so I’ll leave it up to your imagination  🙂
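That said, the general pattern is easy to sketch with made-up names.  Well and DataObjectBase below are hypothetical stand-ins, not from any real schema or from my employer’s code:

using System;

// A hypothetical base class that plugs objects into the larger architecture.
public abstract class DataObjectBase
{
    public abstract void Validate();
}

// Well.generated.cs -- produced by the code generator from the XML schema.
// Regenerating the schema classes overwrites this file.
public partial class Well
{
    public string Name;
    public double Depth;
}

// Well.cs -- hand-written and safe from regeneration.  This partial
// declaration bolts the base class and custom members onto the generated type.
public partial class Well : DataObjectBase
{
    public override void Validate()
    {
        if (Depth < 0)
            throw new InvalidOperationException("Depth cannot be negative.");
    }
}

The compiler requires that all parts of a partial class agree on the base class, but only one part has to state it, which is exactly what lets you layer inheritance onto generated code without touching the generated file.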

An interesting facet of this inheritance capability, including how it differs between C# and VB, is discussed by Cory Smith on his blog.


JSON Lives

It has always been interesting to me how long it can take a technology to become truly rooted in the minds and toolsets of the software development community.  This mandatory waiting period for technological stability seems to be just long enough that most of the fads fall through the cracks.  Many tech buzzwords get thrown around, and there is no shortage of hype.  Web 2.0, AJAX, WCF, WF, Ruby on Rails, NUnit, TDD, Scrum, Agile, Pair Programming… Looking back at the topics PWOP Productions has covered over the past year, I see a bunch of stuff that is undeniably cool, but typically not what I use in my day-to-day work.  With a few notable exceptions, the majority of the newer technologies have not made it into corporate IT departments just yet.  Actuarial types like to see the riff-raff settle out before they build their infrastructure on top of it.

Most of what flies through your RSS reader or pops up on your iPod is a curious glance at an interesting idea that will ultimately be a blip on the radar screen of 21st century computing.  To provide an example: I am a huge fan of CruiseControl.NET, but I believe its days are numbered.  It is only a matter of time before Microsoft refines TFS, integrates it into Visual Studio, and makes it cheap enough that you’d have to be a zealot to ignore its utility.  Does anybody remember Netscape?

Now I am not trying to make the point that Microsoft eventually gobbles everything up and is masterfully elegant at copying other people’s great ideas.  Everyone already knows that.  My point is that it is possible to expend a whole bunch of brain cycles on stuff that does nothing more than… take up your brain cycles.  So how do you pick the winners?  How do you know what to learn today so that two years down the road you will be the expert in the technology that is threaded through the fabric of all things software?  How could I have known 10 years ago that my time would be better spent learning XML than COM?

I don’t know.

But what I do know is that I like the simplicity and lightweight format of JSON (JavaScript Object Notation).  It seems like something that satisfies the requirements of what 95% of XML is used for, without the verbosity.  It has the simplicity of a URL-querystring-like name/value pair, with just enough governance and convention to provide some stability.

JSON is essentially made up of objects, arrays, and values.  Values can come in any of the following flavors:

  • string
  • number
  • object
  • array
  • true
  • false
  • null

So you are probably starting to see the recursive nature in that values can be objects and arrays.

var tacos = {
  "breakfast":
    [ {"meat":"sausage","filler":"egg","quantity":1},
      {"meat":"bacon","filler":"egg","quantity":1} ],
  "dinner":
    [ {"meat":"beef","filler":"beans","quantity":2},
      {"meat":"chicken","filler":"lettuce","quantity":1} ]
}

Although there are several ways to describe this in XML, the most common would probably be:

<tacos>
  <breakfast>
    <meat>sausage</meat><filler>egg</filler><quantity>1</quantity>
    <meat>bacon</meat><filler>egg</filler><quantity>1</quantity>
  </breakfast>
  <dinner>
    <meat>beef</meat><filler>beans</filler><quantity>2</quantity>
    <meat>chicken</meat><filler>lettuce</filler><quantity>1</quantity>
  </dinner>
</tacos>

The example above shows how XML can be quite a bit more verbose than JSON: the JSON version has 223 characters while the XML version has 304, which makes the JSON roughly 25% smaller.  Because quick response times matter for background HTTP requests, JSON has taken off in the AJAX community more quickly than in other circles (which is ironic, since the X in AJAX stands for XML :)).
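For a taste of what consuming this structure looks like in .NET, here is a minimal C# sketch.  The Taco type and its property names are assumptions made to mirror the example JSON, and System.Text.Json is just one possible serializer:

using System;
using System.Collections.Generic;
using System.Text.Json;

// A hypothetical type mirroring one taco entry from the JSON above.
public class Taco
{
    public string meat { get; set; }
    public string filler { get; set; }
    public int quantity { get; set; }
}

public class TacoDemo
{
    public static void Main()
    {
        string json = @"{
          ""breakfast"": [ {""meat"":""sausage"",""filler"":""egg"",""quantity"":1},
                           {""meat"":""bacon"",""filler"":""egg"",""quantity"":1} ],
          ""dinner"":    [ {""meat"":""beef"",""filler"":""beans"",""quantity"":2},
                           {""meat"":""chicken"",""filler"":""lettuce"",""quantity"":1} ] }";

        // Each top-level key (breakfast, dinner) maps to an array of tacos,
        // so a dictionary of lists captures the whole structure.
        var menu = JsonSerializer.Deserialize<Dictionary<string, List<Taco>>>(json);

        foreach (var meal in menu)
            foreach (var taco in meal.Value)
                Console.WriteLine($"{meal.Key}: {taco.quantity} x {taco.meat} with {taco.filler}");
    }
}

Notice how the dictionary-of-lists shape falls straight out of the recursive object/array/value structure described above.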

So is JSON worth spending time on?  Will it develop schema support as robust as XML’s and grow into the next superstar that shows up as the format for configuration files in Visual Studio 2010?

What do you think?

Data in the Cloud? Not For Me…

As I was cheerfully driving into work and auditorily inhaling my daily dose of .NET Rocks, I heard a rather compelling discussion between Scott Cate and Carl Franklin/Richard Campbell.

[Image: entry page for Shoope]

The site mentioned in the podcast was called Shoope, which I have to say is kind of a stupid name.  Part of the reason I think the name sucks may be an annoying little catchy phrase I can’t get out of my head.  Since I have unsuccessfully tried to register for the beta about 10 times over a span of 10 hours to try out the mystical data store, my mind keeps heckling, “Shoope is Poop!”  The logo only bolsters this unfair moniker by having the “p” defecate some mysterious pile of goo.  What’s up with that?

Now I am pretty sure the name doesn’t fit, given the caliber of people involved in the project.  Scott Cate and Rob Howard seem to be very intelligent guys, and I am sure they have teamed up with some other really talented developers for this new concept.  So why do I get a SQL timeout every time I try to see what they’re up to?  Is it the overwhelming popularity of the announcement on .NET Rocks?  Cate opened up the beta testing to the first 5000 listeners who registered with code “DNR” on the website.

So what is Shoope?  It is an online data store that puts your data “in the cloud” for access and sharing anywhere.  It provides a set of dynamic services that allow you to access, modify, and search across your data.  I haven’t been able to find out much else online, as things seem to be just out of the gate with the beta testing.  My efforts to try out the beta will continue as traffic hopefully dies down.

I can appreciate the volume of traffic that anything mentioned as free on DNR would generate.  After all, I don’t even have a use case for Shoope; I just wanted to see what it was all about.  Nonetheless, I was underwhelmed that they neither anticipated that volume of traffic nor handled it once the site started cratering.  The thought of throwing your data out “into the cloud” is a discomforting one, and this doesn’t do much in the way of building warm fuzzies.

Scott Hanselman Endorses Visual Basic?

Auditorily enjoying my morning dose of podcasts, I found myself listening to Hanselminutes yesterday, specifically the show on LINQ to XML.  Now, Carl Franklin has always been a great host of the show, but is it just me or has the dynamic changed now that Scott Hanselman has gone to work for Microsoft?  I digress…

[Image: Visual Basic, a legend in its own right]

At any rate, my point is that Scott, a long-time C# proponent, gave some interesting props to Visual Basic.  So now I have a dilemma: do I buy into his rationale, or is he just selling the Kool-Aid he has drunk?  Why the sudden change of heart?  At least he still used the term “silly” to describe VB 🙂

Recently, I have gained a strong appreciation for simplicity in auxiliary tools: in essence, providing idiot-proof ways to do simple things.  In a full-scale enterprise application, you are always going to have a high degree of complexity, but what about those little helper tools that surround the core application?  Are these candidates for VB?  Some of the syntactical points Hanselman made, combined with the new VB features in .NET 2.0, may be pretty cool.  I have no idea, because I haven’t worked with true Visual Basic since using VB4 with my Dad on his electronic endcap idea in 1995.  I have always had an affinity for the simplicity of classic ASP, even if it does tempt you to write crappy code.

So the next time you need to spin up that test app to help with some random task, which project type in Visual Studio are you gonna choose?