

        MVP Weblogs


by:

It’s likely you’ve heard about Microsoft’s release of the .NET Core source code, their announcement of ASP.NET vNext and the accompanying PR talk. I’d like to point first to two great articles which analyze these bits without being under the influence of some sort of Kool-Aid: “.NET Core: Hype vs. Reality” by Chris Nahr and “.NET Core The Details - Is It Enough?” by Mike James.

I don’t have a problem with the fact that the ASP.NET team wants to do something about the performance of ASP.NET today and the big pile of APIs they created during the past 12-13 years. However, I do have a problem with the following:

“We think of .NET Core as not being specific to either .NET Native nor ASP.NET 5 – the BCL and the runtimes are general purpose and designed to be modular. As such, it forms the foundation for all future .NET verticals.”

The quote above is from Immo Landwerth’s post I linked above. The premise is very simple, yet has far-reaching consequences: .NET Core is the future of .NET. Search for ‘Future’ in the article and you’ll see more references to this remark besides the aforementioned quote. Please pay extra attention to the last sentence: “As such, it forms the foundation for all future .NET verticals”. The article is written by a PM, a person who’s paid to write articles like this, so I can only assume that what’s written there has been eyeballed by more than one person and can be assumed to be true.

The simple question that popped into my mind when I read that ‘.NET Core is the future’ is: “if .NET Core is the future of all .NET stacks, what is going to happen with .NET full and the APIs in .NET full?”

Simple question, with a set of simple answers:

  • Either .NET Core + the new framework libs will get enough body and will simply be called ‘.NET’, and what’s left is sent off to bit heaven, so stuff that’s not ported to .NET Core nor the new framework libs is simply ‘legacy’ and effectively dead.
  • Or .NET Core + new framework libs will form a separate stack besides .NET full and will co-exist like there’s a stack for Store apps, for Phone etc.

Of course there’s also the possibility that .NET Core will follow the fate of Dynamic Data, Webforms, WCF Ria Services and WCF Data Services, to name a few of the many dead and burned frameworks and features originating from the ASP.NET team, but let’s ignore that for a second.

For 3rd party developers like myself who provide class libraries and frameworks to be used in .NET apps, it’s crucial to know which one of the above answers will become reality: if .NET Core + the new framework libs is the future, sooner or later all 3rd party library developers have to port their code over, and the rule of thumb is: the sooner you do that, the better. If .NET Core + the new framework libs will form a separate stack, it’s an optional choice and therefore might not be a profitable one. After all, the amount of people, time and money we can spend on porting code to ‘yet another platform/framework’ is rather limited compared to what a large corporation like Microsoft can spend.

Porting a large framework to .NET Core: how high is the price to pay?

For my company, I develop an entity modeling system and O/R mapper for .NET: LLBLGen Pro. It’s a commercial toolkit that’s been on the market for over 12 years now, and I’ve seen my fair share of frameworks and systems come out of Microsoft which were positioned as essential for the .NET developer at that moment and crucial for the future. .NET Core is the base for ASP.NET vNext and is positioned to be the future of .NET, and applications on .NET Core / ASP.NET vNext will likely use data access to talk to some sort of database. This means that my runtime (the LLBLGen Pro runtime framework, which is our ORM framework) should be present on .NET Core.

Our runtime isn’t small: it spans over 500,000 lines of code and has a lot of functionality, not all of which is considered ‘modern’, but not all of us develop new software: most developers out there actually do maintenance work on software which will likely be used in production for years to come. This means that what’s provided as functionality today will be required tomorrow as well. Add to that that a lot of our users write desktop applications, so our framework has to work on .NET full no matter what. This has the side effect that what’s in our runtime will have to stay there for a long period of time, and porting it to .NET Core effectively means: create a fork of it for a new runtime and maintain the two in parallel.

I’ve done this before, for the Compact Framework, a limited .NET framework that ran on Windows Mobile and other limited devices, so I know what costs come with a port like this:

  • research into what is not supported, which APIs act differently, what limitations there are and which quirks / bugs to stay away from or take into account
  • features in the .NET framework aren’t there, so you have to work around these or provide your own implementation
  • APIs are different or lack overloads, so you have to create conditional compile blocks using #if (see the sketch after this list)
  • because not everything is possible on a limited framework you have to cut features in your own framework, limiting usability
  • fewer features or limited features in your own work mean you have to provide different documentation for these features to explain the differences
  • a different platform requires additional tests to make sure what changed actually works
  • additional maintenance costs for support, as issues only occurring with the additional framework require specific setups for reproducing the issue
  • supporting a new platform isn’t a matter of a week but of a long period of time, as customers take a dependency on your work for years to come.
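
To give an idea of the kind of #if plumbing the ‘conditional compile blocks’ bullet refers to, here’s a minimal sketch; the CF_35 compile symbol and the helper below are made up for illustration and only show the shape such blocks take, not actual LLBLGen Pro runtime code:

using System;

public static class TypeHelper
{
    public static bool IsNullableType(Type type)
    {
#if CF_35
        // The limited framework lacks the convenient API used on .NET full,
        // so the check is done manually here.
        return type.IsGenericType &&
               type.GetGenericTypeDefinition() == typeof(Nullable<>);
#else
        return Nullable.GetUnderlyingType(type) != null;
#endif
    }
}

Multiply a block like this across a code base and the maintenance cost of a parallel fork becomes visible quickly.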

For an ISV or an OSS team these issues have a severe impact: they take time to resolve, and time has a cost, because you can’t spend that time on something else. In short: it’s a serious investment.

I’m not afraid to make these kinds of investments. In the past I’ve spent time on things like the following (time is full-time, development work only):

  • Several months implementing DataSource controls for our runtime to be used in ASP.NET webforms. Dead: ASP.NET vNext doesn’t contain webforms support anymore. We still ship the DataSource controls though.
  • Several months on adding support for Dynamic Data in our runtime. Dead. We don’t ship support for it anymore. Customers who want it can get the source if they want to from the website.
  • Several months on adding support for WCF Ria Services in our runtime. Dead. We don’t ship support for it anymore. Customers who want it can get the source if they want to from the website.
  • Several months on adding support for WCF Data Services in our runtime. Dead, as the future is in WebAPI, which is now merged into ASP.NET vNext. We still ship the library.
  • Five months on adding support for Compact Framework. Dead. We don’t ship support for it anymore. Last version which did is from 2008.
  • Two months on adding support for XML serialization. Dead. JSON is what’s to be used now instead. We still ship full XML serialization support with multiple formats.
  • One month on adding support for Binary Serialization. Dead. JSON is what’s to be used now instead. We still ship full binary serialization support with an optimized pipeline for fast and compact binary serialization of entity graphs.
  • Several weeks on adding support for WCF services in our runtime. Dead, as the future is WebAPI, which is now merged into ASP.NET vNext. We still ship support for it.
  • Several months on adding support for complex binding in Winforms and WPF: Still alive, but future is unclear (see below). We ship full support for it, including entity view classes.
  • Almost a full year on adding support for Linq in our runtime. Still alive. This was a horrible year but in the end it was worth it.
  • One month on adding a full async/await API to our runtime. Still alive. This was actually quite fun.

That’s just the runtime, and the changes required to ‘stay current’ according to Microsoft’s roadmap for .NET and what’s required to build a ‘modern’ application for .NET. As you can see, lots of time spent on stuff that’s considered ‘dead’ today but was very relevant at that moment, or looked like it would become great soon.

One can imagine that with the experience from the past, I’m a bit reluctant nowadays when it comes to supporting new stuff; see it as a case of “fool me 10 times, shame on me”, or something like that. At the same time, things change, and if .NET Core is the future for both server and desktop, we have to abandon the current .NET framework and its features in the future anyway, so moving is inevitable. So what’s one more investment?

It’s not a simple investment

It’s not as simple as ‘one more investment, what harm can it do?’. The thing is that for a small ISV like us, it’s crucial to spend your time on the things that matter: if things fail, it might be fatal to the company. This is different for a team within Microsoft, which still gets a paycheck after a failed project: they move on to the next project, or even get a chance to rewrite everything from scratch. So from the perspective of a Microsoft employee, it might look like something that takes a month or two and then you’re all set for ‘the future’, and if everything fails, well, we’ll all have a laugh and a beer and move on, right?

No.

When you write software for Microsoft platforms you’ll pick up a thing or two after a while and you’ll begin to see a pattern: within Microsoft there are a lot of different teams, all trying to get the OK from upper management to keep doing what they’re doing. The number of teams is so vast that they’re often not really working together but actually against each other, even without knowing it, simply because they have their own agendas and goals which are only known within those teams. All these teams produce stuff, new technology, to gain both users and the attention of upper management. Some of these technologies stick around and gain traction, others fail and die off. It can be that the decision of one team affects the future of another, but that’s part of the game: in the end it will all sort itself out; perhaps both will stay, perhaps both will die, perhaps upper management will step in and demand the teams talk.

We 3rd party developers look at what’s produced by all these teams and hope to bet on the technologies that stick around. Chances are (see above) that you’re betting on a crippled horse with one lung, and your investment is rendered void after a period of time. It’s therefore crucial to know as much as possible up front before taking the plunge and hoping for the best.

With the investment needed to support .NET Core and ASP.NET vNext in our runtime, this is no different: I want to know up front why I am doing this, why this is the best investment for my time, time I can’t spend on new features for my customers. I don’t think that’s an unreasonable question.

“Sell me this framework”

So I want Microsoft to sell it to me. Not with PR bullshit and hype nonsense, but with arguments that actually mean something. I want them to sell me their vision of the future, why I have to make the investment they ask from me. “Sell me this pen”, says Jordan Belfort in ‘The Wolf of Wall Street’, while holding up a basic ballpoint pen. It’s one of the many brilliant scenes in that wonderful movie, and it shows how hard it actually is to sell something which seems trivial but isn’t. With their communication about .NET Core, Microsoft acts like the room full of sales people in the last scene of The Wolf of Wall Street, but they have to act like Brad, who picks up the pen and says “I want you to write your name on a napkin”, to which Jordan replies “But I don’t have a pen”. “Exactly, buy one?”.

It comes down to which future they mean with ‘.NET Core is the future’, and whose future that is. Will my customers who write desktop applications using Winforms or WPF be part of that future? Or will only ASP.NET users be part of that future? It’s very vague and unclear, and therefore uncertain. There’s contradicting information provided both through official channels and through unofficial channels (e.g. email), which paints the picture of the Microsoft we have all known for many years: a group of teams all trying to do their best, providing value for what their team stands for, while we outsiders have to make sense of the often contradicting visions ourselves.

My wife said last night: “They don’t want us there, all they want is stuff they control themselves”. I fear she’s right (as always); I have never felt more unwelcome in the world of .NET as today.

Our future

So I decided to make my own future and see where that gets me. This means I’ll spend the time I would otherwise spend on a .NET Core port on new features for our customers, and will take a ‘wait-and-see’ stance with .NET Core. After all, our customers had and have confidence that what we provide is solid enough for their future, and that’s what matters to me; not necessarily what’s best for Microsoft’s future.


comments 12/9/2014 11:53:00 AM

by: Jan Karel Pieterse
Finally got some time to include an Excel 2007-2013 version of this little tool that traces where you went in your Excel files and gets you back easily!
comments 11/19/2014 7:45:00 AM

by: Jan Karel Pieterse
RefTreeAnalyser 2.0 has just been updated. I have added an option to report all unique formulas in your workbook. Ever had to work out the logic of other people's Excel files? Ever had to untie the spaghetti-knots of a large Excel workbook's formulas? Then you know what a nightmare this can be! Now there is the RefTreeAnalyser! With this tool, finding out how a cell in a workbook derives its results and what other cells depend on the cell is a breeze.
comments 10/23/2014 6:45:00 PM

by: Jan Karel Pieterse
Every quarter Microsoft announces who the lucky ones are to receive their Most Valuable Professional Award. I got re-awarded! To celebrate that I am offering a 3-day 50 percent discount on RefTreeAnalyser. From October 8, 2014 to October 10, 2014 you receive 50% off the list price when you enter this coupon code: MVP2014 Head over to my website now and download the tool, you can try it for free!
comments 10/7/2014 11:30:00 AM

by: Jan Karel Pieterse
Our Excel VBA voor Financials course is proving popular. Hurry if you want to join! A two-day Excel VBA course (19 November and 3 December 2014). Save time by automating your reports! Unveil the secrets of VBA and take your Excel knowledge and skills to unprecedented heights!
comments 10/2/2014 6:00:00 PM

by:

This morning I read the blog post 'Life with a .NET' by Jon Wear. It's about leaving .NET / the Microsoft platform for the great unknown 'outside the Microsoft world'-universe, and it's a great read.

It made me reflect on my (rather secret) journey in that same universe outside everything Microsoft this summer. After I finished LLBLGen Pro v4.2 this summer, I fell into the usual 'post-project' dip, where everything feels 'meh' and uninteresting. Needless to say I was completely empty and after 12-13 years of doing nothing but .NET / C# / ORM development, I didn't see myself continuing on this path. However I also didn't see myself leaving it, for the obvious reason that it's the place where my life's work lives.

Rock, meet Hard Place.

This summer I took a journey to rediscover the love I once had for writing code, starting with Go and Linux, after that Objective-C, Mac OS X / Cocoa, and coming back full circle to .NET with the Task Parallel Library (TPL). It's an unusual trip, but I didn't really know what I was looking for, what it was that I needed to be happy writing code again, so I was open to anything.

In my career I've learned (and luckily also forgotten) a lot of different programming languages and platforms. After 20 years of professionally using all those languages and platforms for short or longer periods of time I can conclude: they all are just a tool to get to your goal, they're not the actual goal themselves.

I already knew this of course when I went into this journey, so learning Go was, in hindsight, more of a 'let's do this, see where it leads me' kind of thing than a real move to Go. After learning the language and working with the tools available I realized it wasn't the world I wanted to be in. The main reason was that I develop and sell tools for a living, I'm not a contractor and Go's commercial ecosystem is simply not really there. After my Go adventure I had learned a new language but nothing of what I needed to get past my problem.

To learn a language and platform, it's best to use it in a real project. Some time ago I had an idea for an app for musicians (I'm an amateur guitarist) on OS X. This was the perfect opportunity to learn a new language and platform, so I did the radical move to learn Objective-C with XCode, targeting OS X. I have to say, this was a true struggle. XCode was 'OK', but Objective-C was something I hated from the start.

Yeah, I know, Xamarin etc., but I didn't want to use that, it would still be C# and I did that already all day long. I know Apple has released a new language (Swift), but at the time I sank my teeth into Objective-C it was still in beta and I thought it would be a good idea to learn Objective-C to understand the platform better anyway.

Besides the Objective-C syntax (oh boy, who cooked that up) I was also faced with a rather unfamiliar framework: Cocoa. After some reading, Cocoa looked similar to the frameworks we have on Windows and Linux/X, but one thing stood out: its dispatch queues and Grand Central Dispatch. For my app idea I needed a lot of parallel processing, and the queues made this easy, as in: you could think about parallel work in a natural way: 'this is a piece of work I want to run in parallel with what you're running already; take care of the rest for me', including work stealing, scheduling, the works.

It matches what modern 3D engines do on multicore CPUs/GPUs: chop up the work into small chunks and schedule those on the available cores. Suddenly I got new ideas on how to do things in parallel in my designer and, more importantly, how to do things in real time. To do that, I needed dispatch queues on .NET. I realized that .NET has had this since .NET 4.0 in the form of the Task Parallel Library (TPL). Strange how things like that are completely uninteresting till you realize their real power, and from then on they're that new toy you can't put down.
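
As a minimal sketch of that idea with the TPL (the names and the work itself are made up for illustration): queue independent chunks of work as tasks, let the scheduler spread them over the available cores, and await the combined result, much like submitting blocks to a GCD dispatch queue:

using System;
using System.Linq;
using System.Threading.Tasks;

class RenderExample
{
    static async Task Main()
    {
        var chunks = Enumerable.Range(0, 8).ToArray();

        // Each chunk becomes a Task; the TPL's scheduler distributes them over
        // the thread pool / available cores, including work stealing.
        var work = chunks.Select(chunk => Task.Run(() => ProcessChunk(chunk)));

        int[] results = await Task.WhenAll(work);
        Console.WriteLine($"Processed {results.Length} chunks");
    }

    // Stand-in for a CPU-bound piece of work, e.g. preparing part of a preview.
    static int ProcessChunk(int chunk) => chunk * chunk;
}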

My little journey brought me back to .NET without realizing it, to rediscover the love of writing code by finding motivation in an element that's a core part of an OS I don't use in my daily work. It opened the route out of the rabbit hole by showing a new path I could take without leaving my life's work behind; on the contrary: it opened my eyes to completely new opportunities and ideas. I would love to share those, but I don't want to give my competitors ideas, sorry ;)

I wanted to write this post to show you that moving away from .NET might look like the solution, but chances are .NET / the Microsoft platform isn't the real problem; it's between your ears: languages, platforms, tool chains, IDEs, editors, shells etc., they're all just means to get somewhere. Using one over the other doesn't change a single thing about where you have to go, only perhaps the way you get there. While a new route to your goal will offer new scenery, the road is likely as bumpy as the one you're used to, and the scenery won't be that special after you've seen it many times.

Safe travels.


comments 9/23/2014 1:20:09 PM

by: Jan Karel Pieterse
Excel VBA voor Financials: a two-day Excel VBA course (19 November and 3 December 2014). Save time by automating your reports! Unveil the secrets of VBA and take your Excel knowledge and skills to unprecedented heights!
comments 9/8/2014 5:35:00 PM

by:

This is a reply to "What ORMs have taught me: just learn SQL" by Geoff Wozniak.

I've spent the last 12 years of my life full time writing ORMs and entity modeling systems, so I think I know a thing or two about this topic. I'll briefly address some of the things mentioned in the article.

Reading the article I got the feeling Geoff didn't truly understand the material, what ORMs are meant for and what they're not meant for. It's not the first time I've seen an article like this and I'm convinced it won't be the last. That's fine; you'll find a lot of these kinds of articles about many frameworks/paradigms/languages etc. in our field. I'd like to add that I don't know Geoff and therefore have to base my conclusions on the article alone.

Re: intro

The reference to the Neward article made me chuckle: sorry to say it, but bringing that up always gives me the notion one has little knowledge of what an ORM does and what it doesn't do. An ORM is just a tool to translate between two projections of the same abstract entity model (class and table, which result in instances: object and table row); it doesn't magically make your crappy DB look like one designed by CELKO himself, nor does it magically make your 12-level deep, 10K-object wide graph persist to tables in a millisecond as if there were just one table. Neither will SQL, for that matter, but Geoff (and Neward before him) silently ignores that.

An ORM consists of two parts: a low-level system which translates between class instances and table rows to transport the entity instances (== the data) back and forth, and a series of sub-systems on top of that which provide entity services (validation, graph persistence, unit of work, lazy / eager loading etc.).

It is not some sort of 'magic connector' which eats object graphs and takes care of transforming those to tabular data of some sort which you don't want to know anything about. It also isn't a 'magic connector' which reads your insanely crappy relational model into a dense object graph as if you read the objects from memory.
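
To make that 'low level system' part a bit more concrete, here's a deliberately oversimplified sketch (not any particular ORM's code) of what the translation from table rows to entity instances boils down to; everything an ORM adds on top of this (identity map, type conversion, inheritance, graph fetches) is left out:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;

static class TinyMaterializer
{
    // Executes the query on an already opened connection and materializes
    // every row into an instance via the supplied projection function.
    public static List<T> FetchAll<T>(DbConnection openConnection, string query,
                                      Func<IDataRecord, T> materialize)
    {
        var result = new List<T>();
        using (var command = openConnection.CreateCommand())
        {
            command.CommandText = query;
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(materialize(reader));  // row -> entity instance
                }
            }
        }
        return result;
    }
}

The second part, the entity services, is what turns this low-level plumbing into something you can build an application on.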

Re: Attribute Creep

He mentions attribute creep (more and more attributes (== columns) per relation (== table)) and FKs in the same section; however, I don't think one is related to the other. Having wide tables is a problem, but it's a problem regardless of what you're using as a query system. Writing projections on top of an entity model is easy if your ORM allows you to, but even if it doesn't, the wide tables are a problem of the way the database is set up: they'll be a problem in SQL as well as in an ORM.

What struck me as odd is that he has wide tables and also a problem with a lot of joins, which sounds like he either has a highly normalized model, which should have resulted in narrow tables, or uses deep inheritance hierarchies. Nevertheless, if a projection requires 14 joins, it requires 14 joins: the data itself isn't obtainable in any other way, otherwise it would be doable through the ORM as well (as any major ORM allows you to write a custom projection with joins etc. to obtain the data, materialized in instances of the class type you provide). It's hard to ignore the fact that the author might have overlooked easy-to-use features (which hibernate provides) to overcome the problems he ran into, and at the same time it's a bit odd that a highly normalized model is the problem of the ORM but won't be a problem when using SQL (which has to work with the same normalized tables).
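
As an illustration of such a custom projection, here's a small sketch with hypothetical, in-memory stand-ins for entities; against a real Linq provider the identical query shape is translated into a single SQL statement with the join and the narrow projection:

using System;
using System.Linq;

class ProjectionExample
{
    class Customer { public int CustomerId; public string CompanyName; }
    class Order { public int OrderId; public int CustomerId; }

    static void Main()
    {
        var customers = new[] { new Customer { CustomerId = 1, CompanyName = "Acme" } };
        var orders = new[] { new Order { OrderId = 10, CustomerId = 1 } };

        // The projection only pulls the columns it needs, regardless of how
        // wide the underlying tables are.
        var rows = from o in orders
                   join c in customers on o.CustomerId equals c.CustomerId
                   select new { o.OrderId, c.CompanyName };

        foreach (var row in rows)
        {
            Console.WriteLine($"{row.OrderId}: {row.CompanyName}");
        }
    }
}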

He says:

Attribute creep and excessive use of foreign keys shows me is that in order to use ORMs effectively, you still need to know SQL. My contention with ORMs is that, if you need to know SQL, just use SQL since it prevents the need to know how non-SQL gets translated to SQL.

I agree with the fact that you still need to know SQL, as you need to formulate the queries in your code in such a way that they lead to efficient SQL; an ORM can do a bit of optimization, but it is almost impossible to do much without statistics/data (which are not available at that stage). But you can't conclude from that that one should 'just use SQL', as that's like recommending learning to write Java bytecode because the syntax of Clojure is too hard to grasp. A better conclusion would be to learn the query system better so you can predict the SQL which will be produced.

Re: Data Retrieval

Query performance is always a concern, and anything between code and the actual execution of the DML in the DB is overhead. Hand-optimized SQL might be a good option in some areas, but in the majority of cases queries generated by ORMs are fine, even hibernate's ;). Most ORMs have a query language / system which is derived from SQL to begin with (the mentioned hibernate does: HQL) and it is predictable what SQL it will roughly produce.

Sure, if you create deep inheritance hierarchies over your tables, you might run into a lot of joins, but that's known up front: inheritance isn't free, one knows what it will do at runtime. "Know the tool you're working with". If Geoff was surprised to see a lot of joins because a 14-entity deep inheritance hierarchy was pulled from the DB, he should have known better.

He says:

From what I've seen, unless you have a really simple data model (that is, you never do joins), you will be bending over backwards to figure out how to get an ORM to generate SQL that runs efficiently. Most of the time, it's more obfuscated than actual SQL.

I find this hard to believe with the query systems I've seen and written myself, with one exception: Linq. Linq is a bit different because it has constructs (like GroupBy) which mean something different in Linq/code than they do in the DB; these require a translation of intent from the query to SQL and thus can / will lead to a SQL query which might not be what one would expect when reading the Linq query.
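
A small, self-contained illustration of that difference in intent (in-memory data here, but the query shape is what matters):

using System;
using System.Linq;

class GroupByExample
{
    class Order { public string CustomerId; public decimal Total; }

    static void Main()
    {
        var orders = new[]
        {
            new Order { CustomerId = "ALFKI", Total = 10m },
            new Order { CustomerId = "ALFKI", Total = 20m },
            new Order { CustomerId = "BONAP", Total = 5m }
        };

        // In Linq, GroupBy yields the groups themselves: each group still
        // contains the full Order objects. SQL's GROUP BY only returns
        // aggregated rows, so a Linq-to-SQL provider has to bridge that gap
        // (e.g. by fetching a flat, ordered set and re-grouping it in memory),
        // which is why the generated SQL can look nothing like the Linq query.
        var perCustomer = orders.GroupBy(o => o.CustomerId)
                                .Select(g => new { CustomerId = g.Key, Orders = g.ToList() });

        foreach (var g in perCustomer)
        {
            Console.WriteLine($"{g.CustomerId}: {g.Orders.Count} orders");
        }
    }
}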

The usage of window functions and other DB-specific features (like query hints) might be something that's not doable in an ORM query language. There are several solutions to that though: one is creating DB functions which are mapped to code methods, so you can use those methods inside your query, which results in the functions being used in the SQL query; another is DB views. Both require actions inside the RDBMS, which is less ideal, but if it helps in edge cases, why not? They're equal to adding an index to some fields to speed up data retrieval, or creating a denormalized table because the data is read-only anyway and it saves the system using it a lot of joins.

Re: Dual schema dangers

Here I saw the struggle Geoff has with the concept of ORMs. This isn't uncommon; e.g. Neward (in my opinion) expresses the same struggle in his cited essay. There are two sides with a gap between them: classes and table definitions. Starting with classes and trying to create table definitions from them is the same as starting with table definitions and trying to create classes from them: both are the projection result of an abstract entity model, and to get one from the other requires reverse engineering the side you start with back to the abstract entity model it was a projection of, and then projecting that model onto the side you want to create. Whether you start from classes or from table definitions doesn't matter.

I do understand the pain point when you start with either side and have to bridge the gap to the other side: without the abstract entity model as the single source of truth, it's always a problem to update the other side when one side changes.

Geoff tries to blame this on the ORM, but that's not really fair: the ORM is meant to work with both sides (classes and tables) at run time, not at design time; managing both sides requires a system meant for modeling an abstract entity model, as both sides are the result of that model, not the source of it. (I wrote one, see 'Links to my work' at the top left. I didn't want to pollute this article with references to my work.)

Re: Identities

Creating new entity instances which get their PK set by a sequence in the DB is the main cause of the problem, if I understand Geoff's description correctly. In memory these entities have no real ID, and referring to them is a bit of a pain, true. But that's related to working with objects in general: any object created is either identified by some sort of ID you give it or by its memory location ("the instance itself"). I don't get the struggle with the cache and partial commits: if you want to refer to objects in memory, it's the same as what you would do if they weren't persisted to a DB. That they get IDs in the DB in the case of sequenced PKs is not a problem: the objects get updated after the DB transaction completes. Even hibernate is capable of doing that.

Re: Transactions

This section is a typical description of what happens when you confuse a DB transaction with a business transaction. A business transaction can span more than one DB transaction, might involve several subsystems / services, might even use message queues, and might even be parked for a period of time before commit. A DB transaction is more explicit and low-level: you start the transaction, you do work, you commit (or roll back) the transaction and that's it.

Geoff's reference to scope is good: it illustrates that there's a difference between the two and that you therefore shouldn't use a DB transaction when you need a business transaction. It's too bad he misses this himself, however. Developers often try to implement a business transaction at the level of an ORM by using its unit of work, but that's too low-level: a business transaction might span several systems, and an ORM isn't the right system to control such a transaction; it's meant to control one DB transaction, that's it.

That doesn't mean the ORM shouldn't provide the tools to help a developer write proper business transaction code with the systems controlling the business transaction. After all, the second part of an ORM is 'entity services' and one being 'Unit of work'. Most ORMs follow the Ambler paper and combine a Unit of Work with their central Session or Context object. This leads to the problem that you can't offer a Unit of Work without the central Session or Context object and thus when you actually want a Unit of Work to pass around, collecting work for (a part of) the business transaction, you don't want to deal with a Session / Context object which also controls the DB connection / transaction; it might be that at that level / scope it's not even allowed / possible to do DB oriented work.

It's therefore essential to have an ORM which offers a separate Unit of Work object, which solves this problem. Additionally to that, the developer has to be aware that a business transaction is more than just a DB transaction and should design the code accordingly.

Re: Where do I see myself going

A highly normalized relational model (4+ normal form) which is used to retrieve denormalized sets is not likely to perform well (as the chance of a high number of joins in most queries is significant), no matter what query system you're using. I get the feeling that what Geoff ran into is partly caused by reporting requirements (which often require denormalized sets of (aggregated) data), partly by inheritance hierarchies (not mentioned, but given the number of unexpected joins I think this is the case), and partly by poorly designed relational models.

None of those are solved magically if you use SQL instead of HQL or whatever query language you're using in an ORM. Not only is 'SQL' a query language and not a query system, it also doesn't make the core problems go away. Well, perhaps the inheritance one, as you can't have inheritance in SQL, but then again, you're not forced to use inheritance in your entity model either.

He says:

By moving away from thinking of the objects in my application as something to be stored in a database (the raison d'être for ORMs) and instead thinking of the database as a (large and complex) data type, I've found working with a database from an application to be much simpler.

Here Geoff clearly illustrates a misconception about ORMs: they're not there to persist object graphs into some magic box in the corner; they're a system to move entity instances (== data) across the gap between two projections of the same abstract entity model. It's no surprise it turns out to be much simpler if you see your DB as part of your application, because it is part of your application. If we ignore the difference in level of abstraction, talking to a DB through a REST service is equivalent to talking to a DB through an ORM which provides you with data: both times you go through an API to work with the entity instances on the other side. The REST service isn't a bucket you throw data into, and neither is the ORM.

Re: conclusion

SQL is a query language, not a query system. It's therefore not an alternative to the functionality provided by an ORM. ORMs make some things, namely the things they're built for, very easy. They make other things, namely the things they're not built for, hard. But the same can be said about any tool, including SQL (if we see a language as a tool): SQL is set-oriented, and therefore imperative logic is hard to do in it, so one shouldn't do imperative logic in SQL. Blaming SQL for being crap at dealing with imperative logic doesn't make it so; it merely shows that the person doing the blaming doesn't understand what SQL is meant to do and what it isn't meant to do.

In closing I'd like to note that what's ignored in the article is the optimized SQL ORMs generate with respect to e.g. updates and graph fetches (eager loading). Let alone the fact that to execute the SQL query and consume the results, one has to write a system which is the core of any ORM: the low-level query execution system and object materializer.

It always pains me to read an article like Geoff's about a long struggle with ORMs, as it's often based on a set of misconceptions about what ORMs do and what they don't do. This is partly to blame on some ORM developers (let's not name names) who try to sell the image that an ORM is a magic object graph persister which will turn your RDBMS into an object store. It's also partly to blame on the complexity of the systems themselves: you don't simply learn how to use all of an ORM's features and quirks overnight.

And sadly, it's also partly to blame on the users, the developers using the ORMs, themselves. Suggesting a query language as the answer (and with that the tools that come with it) isn't going to solve anything: the root problem, working with relational data in an OO system, i.e. bridging the gap between class and table definitions, still has to be solved, and using SQL and low-level systems to execute it will only move that problem onto your own plate, where you run the risk of re-inventing the wheel, albeit poorly.


comments 8/5/2014 12:53:54 PM

by:

In 2012, I thought it might be a good idea to register for a Windows Store Account, oh sorry, 'Windows Developer Services-account'. As you might recall, signing up was a bit of a pain. After a year, I decided to get rid of it as I didn't do anything with it nor did I expect to do anything with it in the future and as it costs money, I wanted to close the account. That too was a bit of a pain.

To sign up for a Windows Store Account / Windows Developer Services-account, Microsoft outsources the verification process to Symantec. The verification process is there to make sure that the person who signed up (me) really works at company X (I even own it), and Symantec is seen by Microsoft as up to the task of doing that. As you can read in my sign-up blog post, the process includes Symantec contacting a person at the company other than the person who registered, who has to be entitled to confirm that I am who I say I am.

Is Symantec, a totally different company from Microsoft, really up to the task? Well, let's see, shall we? As you can read above, I cancelled my Windows Store Account almost a year ago. One would think that by now Microsoft would have sent Symantec a memo stating that the individual 'Frans Bouma' is no longer a Windows Store developer card-carrier. In case they have (which I can't verify, pun intended), Symantec has a lousy way of keeping track, as last week my company received a lovely request from Symantec to verify with them whether 'Frans Bouma' was indeed working for my company and whether I was who I said I was. You know, for the Windows Developer Services account.

Now the following might read like I stepped into the oldest phishing trap in the book, but everything checked out properly: we use plain-text email only, we copied the URLs over, and the URLs were simple and legit.

We first thought it was spam/phishing, so we ignored it. But this morning a new email arrived as a reminder. So we painstakingly went over every byte in the email and its headers. The headers checked out (all routed through Verisign, now part of Symantec, and Symantec itself), and the URLs in the email checked out (we only look at plain-text emails). The email was sent to the same person who verified me 2 years ago, so we concluded it must be legit. We had a good laugh about it, but what the heck, let's verify again. How would that work exactly, that verification process?

So we copied the URL from the plain-text version of the email (which was a simple URL into Symantec) to a browser, it arrived at Symantec, listed info about my account, and all that was left to be done was click the verify button. It's laughably simple: just click a button! I do recall that the first time it was a phone call, but instead of getting rid of this whole Symantec bullshit, Microsoft apparently decided that clicking a button instead is equal to 'making things simpler'.

After a couple of minutes, I received in my mailbox an email that cheered 'congratulations!': I was re-verified, my Microsoft Developer Services account was renewed and I could keep developing apps for the Windows Store.

But… I ended my account almost a year ago. Or did I? To verify whether I really got rid of this crap or not, I went to the sites I had visited before to register and end the account, but they only showed me Xbox Live stuff, no developer account info.

Headers of reply email:

Received: from spooler by sd.nl (**********************); 29 Jul 2014 10:03:43 +0200
X-Envelope-To: frans********************
Received: from authmail1.verisign.com (69.58.183.55) by **********************
 (***********************) with Microsoft SMTP Server id 14.3.174.1; Tue, 29 Jul 2014
 10:06:08 +0200
Received: from smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com
 (smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com [10.244.24.61])	by
 authmail1.verisign.com (8.13.8/8.13.8) with ESMTP id s6T8674q001640
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)	for
 <frans@**********************>; Tue, 29 Jul 2014 08:06:07 GMT
Date: Tue, 29 Jul 2014 08:06:07 +0000
From: <microsoft.orders@symantec.com>
To: <frans@***********************>
Message-ID: <1717526233.2131406621167061.JavaMail.support@geotrust.com>
Subject: Informatie over Microsoft Developer Services-account **********************
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Loop-Check:
Return-Path: microsoft.orders@symantec.com
X-MS-Exchange-Organization-AuthSource: **********************
X-MS-Exchange-Organization-AuthAs: Anonymous
X-MS-Exchange-Organization-PRD: symantec.com
X-MS-Exchange-Organization-SenderIdResult: None
Received-SPF: None (**********************: microsoft.orders@symantec.com does not
 designate permitted sender hosts)
X-MS-Exchange-Organization-SCL: 0
X-MS-Exchange-Organization-PCL: 2
X-MS-Exchange-Organization-Antispam-Report: DV:3.3.13320.464;SID:SenderIDStatus None;OrigIP:69.58.183.55
X-MS-Exchange-Organization-AVStamp-Mailbox: MSFTFF;1;0;0 0 0
MIME-Version: 1.0

(sensitive info of my own replaced with ****)

I wonder: will Symantec keep trying to verify me as a Windows Store developer for the rest of my life, even though I no longer have a subscription to that service from Microsoft? The data about this account in Symantec's databases will likely never be purged unless they get rid of the account data from Microsoft entirely, or I stop verifying (and probably not even then).

In 2012 I already found it pretty bad that my account info with Microsoft was shared with another 3rd party, Symantec, and today I find it even worse: I no longer have a Windows Store dev account with Microsoft, but Symantec a) still thinks I do and b) keeps information about me, while I never had the intention of signing up with Symantec at all.

Microsoft will never attract large droves of devs writing apps for its Windows Store unless it makes the whole process seamless and without leaking sensitive information to 3rd party corporations who can do whatever they please with it.


comments 7/29/2014 11:03:39 AM

by:

We've released LLBLGen Pro v4.2 RTM! v4.2 is a free upgrade for all v4.x licensees and if you're on v3.x, you can upgrade with a discount.

For what's new, I'd like to refer to the what's new page on the LLBLGen Pro website. Smile


comments 7/2/2014 1:33:08 PM

by: Jan Karel Pieterse
RefTreeAnalyser 2.0 has just been updated. Improved performance of formula checking and added formula block highlighting. Ever had to work out the logic of other people's Excel files? Ever had to untie the spaghetti-knots of a large Excel workbook's formulas? Then you know what a nightmare this can be! Now there is the RefTreeAnalyser! With this tool, finding out how a cell in a workbook derives its results and what other cells depend on the cell is a breeze.
comments 6/20/2014 5:50:00 PM

by:

This morning we've released LLBLGen Pro v4.2 BETA! The beta is available to all v4 customers and can be downloaded from the customer area -> v4.2 section.

Below is the extensive list of new / changed features. Enjoy! :)

LLBLGen Pro v4.2 beta, what's new / changed.

Main new features / changes

General

  • Allowed Action Combinations: Specify which actions are allowed on an entity instance: Any combination of Create/Read/Update/Delete.
    Supported on: LLBLGen Pro Runtime Framework (all combinations, R mandatory), NHibernate (CRUD and R).
    Action Combinations make it easy to define e.g. entities which can only be created or read but never updated nor deleted. The action combinations are defined at the mapping level and checked inside the runtime and are additional to the authorization framework.

Designer

  • Copy / Paste support for all model elements (entity, value type, typed list, typed view, table valued function call, stored procedure call):
    Paste full (with mappings and target tables) or just model elements, across instances (stand alone designer only) or within the project (VS.NET integration and standalone designer).
  • Automatic re-application of changed settings on an existing project:
    e.g. changing a pattern for a name will reapply the setting on the existing model, making sure the names comply with the setting value.
  • New name patterns for auto-created FK/UC/PK constraints (model first).
    This makes it possible to define a naming pattern for e.g. FK constraints other than the default FK_{guid}. You can use macros to make sure the FK name reflects e.g. the fields and the tables it is referencing.
  • It's now possible to save search queries in the project file.
  • Ability to define default constraints for types, per type - DB combination (model first).
    This makes it possible to for example define a custom type, e.g. EmailAddress, based on the .NET string type, with length 150 and a default of "undefined@example.com" for SQL Server and then define a field in an entity with type 'EmailAddress'. Creating the database tables from this model in the designer will then result in a default constraint on the table field the email address field is mapped on with value "undefined@example.com".
  • General editors per project element type:
    one editor which is kept open and shows the element currently selected in the project explorer, making it very easy to check / edit configurations on multiple elements. This makes it possible to e.g. edit or look at mapping data for several entities quickly, by opening the general entity editor with the field mappings tab active while selecting the entities to check / edit in the project explorer: the field mappings tab stays visible, so the data of the selected entity is shown each time.
  • Intellisense helpers in QuickModel for types, names and relationship types:
    It's now possible to open helper lists of names in scope, types available and the list of relationship types to help you write quick model expressions more easily.
  • Hide / Filter warnings:
    It's now possible to hide / filter out warnings in the error/warning pane based on warning ID. The hidden/filtered out warnings are viewable again using a toggle and which IDs are filtered out is stored in the project.
  • Element selection rules on tasks (code generator).
    It's now possible to define selection rules on tasks in a run queue for the code generator which select which elements participate in the task, based on setting values. This makes it easy to define a setting for a user which is then taken into account in the code generator to execute different tasks based on the value of the setting.
  • New refactoring: replace selected fields with existing value type.
    This makes it easier to work with value types in the designer: if a selected value type matches (based on a set of defined rules) the selected fields, the fields are replaced with the selected value type and mappings are adjusted accordingly.
  • Automatically assign found sequences to entity fields based on a pattern (database first).
    Based on a name pattern the reverse engineering engine will select fields of entities which should get a sequence assigned to them, if the name pattern resolves to a name of a found sequence. This makes it easier to reverse engineer models from databases which use sequences for identity values, like Oracle and PostgreSQL.

LLBLGen Pro Runtime Framework

  • Expression support during Inserts
    It's now possible to define an expression on an entity field which is used during inserts. The expression defined is used to produce the field value.
  • Generate Typed Lists as POCO classes with a Linq or QuerySpec query.
    It's now possible to generate a typed list or all typed lists (controllable through settings) as a simple POCO class which holds the data of a row in the resultset and a Linq or QuerySpec query to execute the typed list.
  • Generate Typed Views as POCO classes and use them in Linq and QuerySpec.
    It's now possible to generate a typed view or all typed views (controllable through settings) as a simple POCO class and use it in Linq or QuerySpec queries.
  • Transparent Transient Error Recovery (adapter only).
    The transient error recovery system introduced in v4.1 has been upgraded so it can now be used transparently: define once and it is automatically used when executing a query. It's no longer necessary to explicitly execute a query through a recovery strategy.
  • Cached resultset tagging for easy cache purge/retrieval
    It's now possible to tag a query's resultset if that resultset is cached so the resultset can be retrieved from the cache using the tag and also it's now possible to purge the resultset(s) associated with the tag from the cache.
  • Action Combination support (see above).
    It's now possible to define an entity type as e.g. Read Only or Read / Create (or any of the other combinations) and the engine will automatically check at runtime whether an action (e.g. the delete of an entity instance) is allowed, and will deny the action if it isn't part of the defined allowed action combinations of the entity type.

Entity Framework

  • Code First support (Entity Framework v6+)
    It's now possible to generate Entity Framework v6 code with Code First mappings instead of EDMX using mappings. This allows you to keep using model first or database first modeling techniques in the designer and emit Code First output: POCO classes for the entity model and Code First code defining the model mappings.

NHibernate

  • Support for Read-only entities (See Action Combinations above)
  • String lengths are now emitted into the mappings:
    The lengths in the mappings make sure NHibernate makes the right decisions at runtime with respect to strings.

Minor features / changes

General

  • Support for <DependentUpon> element in CS/VBProj files (Code generator)
  • Support for default presets (Code generator)

Designer

  • Added a setting to control whether names are singularized during reverse engineering
  • When a relationship is marked as 'ModelOnly', the backing FK (and UC) of the original relationship (from when it wasn't model only) are now removed from the relational model data, provided the FK (and UC) were created by the designer and no other model element relies on them (e.g. another relationship). Previously they were kept around.
  • .NET 4.5.2 has been added as a supported platform
  • A directive has been added to the designer's config file to enable (it's disabled by default) high-DPI winforms support on .NET 4.5.2.
  • Context menus for entities in project explorer and model views have been re-ordered and more commands have been added to make working with the elements through context menus more convenient.
  • Stored procedure call parameters and Table-valued-function call parameters are now selectable in the code generation info tab and settings specifically for these elements will now show up there.
  • SQL Server 2014 is now a supported database (through the SQL Server driver/templates).
  • When a typed view is mapped onto a stored procedure resultset, it now uses the stored procedure name strip pattern instead of the Table Valued Function strip pattern to produce a proper procedure name for the macro {$ProcFunctionName}.
  • The default sorting on the error lister is no longer on 'Time' but on message type and then source, so errors appear first, then warnings, and within each message type the messages are sorted on source, ascending.
  • FIX: In a typed list, when a relationship join hint was changed, the project wasn't marked as 'changed'.
  • When a project is loaded, all root nodes are now collapsed, which makes it easier to work with larger projects.
  • When a new element is added to the project, e.g. an entity or typed view, either directly or through reverse engineering, the state of the root nodes is remembered, so the root nodes no longer all expand when an element is added; only the root node of the added element(s) is expanded to show the new elements.
  • Setting an existing field to a custom shortcut will set the maxlength/precision/scale. In v4.2, an existing field set to a shortcut which has a default length/precision/scale will receive these values for maxlength/precision/scale, overwriting any existing value.
  • Multi-line input support in QuickModel: it's now possible to paste multiple lines with quick model statements in the input box, which are then executed one by one
  • Preference names are now beautified: properly word-broken and lower-cased, and thus easier to read than the previous preference names, which were equal to the camel-cased property names.
  • Catalog explorer details are now automatically shown when a node is selected if details viewer is open.
  • A 'Collapse Child Nodes' feature has been added to the context menu of certain nodes in the project explorer and catalog explorer: all nodes whose child nodes can themselves have child nodes now have a 'Collapse Child Nodes' entry in their context menu, which makes it easier to reset the tree to a workable form after many expand actions.
  • PostgreSQL driver now also obtains materialized views as 'views'. Postgresql servers v9.3 or later required.
  • It's now possible to define different default values for resultset retrieval. A driver will retrieve stored procedure resultsets using default values for the parameters of the procedures selected. At the wizard tab for stored procedure selection, the user can now click a button to define different default values for the supported value types (and string), to avoid stored procedures being excluded because they reject the original default values (e.g. a stored procedure which requires a value larger than 0 for an int parameter otherwise it will return with an error will now no longer do so if the default for int is set to a value larger than 0).
  • License file can now also be placed in 'My Documents\LLBLGen Pro'

LLBLGen Pro Runtime Framework

  • QuerySpec: Multiple calls to query.From(operand) are now appending the operand to the existing From clause if operand starts with QueryTarget. If it doesn't start with QueryTarget and there's an existing From clause, it will overwrite the existing From clause.
  • OData: The OData Support Classes now support the IgnorePropertiesAttribute on entity classes. The names specified using the attribute have to be defined on the entity type the attribute is defined on, so inherited properties can't be filtered out using this attribute.
  • Low level api: Duplicate sort clauses are now filtered out so accidentally added duplicates through e.g. OData are no longer causing exceptions at runtime.
  • Dynamic Query Engines: when the source of a field isn't known, the field creator functionality will no longer emit a dangling '.' but will simply emit only the field name/alias. This way constructs like .OrderBy("Name".Ascending()) will work, where the engine will emit ORDER BY [Name] ASC. Previously the above construct would result in ORDER BY .[Name] ASC, which would fail.
  • Query traces: The value of a parameter of type DateTime in a query is now emitted as an ISO 8601 / roundtrip formatted string, which is more precise than the previous 'ToString()' call on the DateTime, which didn't include fractions of a second (see the small example after this list).
  • FunctionMappings added (Linq/QuerySpec): sbyte/byte/ushort/short/uint/int/ulong/long.ToString() mappings have been added to all DQEs for all databases.
  • EntityBase(2).AllowReadsFromDeletedEntities allows code to read from an entity that has been marked as deleted. It's been set to 'false' as the default which will result in an exception if code reads from a deleted entity, like in previous versions.
  • SQL Server 2014 is now a supported database (through the SQL Server DQE/templates). Use 2012 compatibility to utilize the 2012 or higher features.
  • FIX: QuerySpec: A projection lambda was created using a parameter which was created anew for every query, which resulted in a new cache key for the lambda, so the lambda was compiled every time instead of re-using a cached version. The lambda is now created using the same parameter as the original and the cached compiled version is re-used in subsequent executions of the same projection, so query creation is a bit quicker.
  • FIX: QuerySpec: QuerySpec didn't properly replace function mappings in derived tables.
  • QuerySpec: There's now a class available to create a projection lambda quickly for Select<T>() calls, called ProjectionLambdaCreator. This class has two overloads of its Create() method which creates at runtime a projection lambda for T from either a specified fields set or a fields creation class (e.g. CustomerFields). The overload which accepts the fields creation class caches the created lambda and is therefore much faster than a lambda created in code and compiled by the C# / VB.NET compiler which will create a new Expression<Func<T>> at runtime each time it is run.
  • QuerySpec: There's now a special Select method available which produces its own lambda projector from two types given: .Select<SomeDTO, SomeElementFields>()
  • QuerySpec: The usage of QueryTarget is now also supported in DynamicQuery / DynamicQuery<T> instances, but only in appending join operands to an existing query

  • QuerySpec: It's now possible to clone the projection of a derived table / aliased subquery in an outer query's Select() method; you no longer need to create a new field for each targeted subquery field if you want to effectively clone the projection of a subquery.

  • It's now possible to generate case insensitive SQL for case sensitive databases using a setting.
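
Regarding the query-trace DateTime formatting mentioned above: a small illustration (plain .NET behavior, nothing framework specific) of why the roundtrip ('O') format is the more precise choice:

using System;
using System.Globalization;

class DateTimeTraceExample
{
    static void Main()
    {
        var value = new DateTime(2014, 6, 2, 13, 53, 47, 123);

        // Culture dependent and without fractional seconds: what a plain
        // ToString()-style format gives you.
        Console.WriteLine(value.ToString(CultureInfo.InvariantCulture));
        // -> 06/02/2014 13:53:47

        // ISO 8601 / roundtrip format: includes the fractional seconds, so the
        // traced parameter value matches what was actually sent to the DB.
        Console.WriteLine(value.ToString("O", CultureInfo.InvariantCulture));
        // -> 2014-06-02T13:53:47.1230000
    }
}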


Entity Framework

  • It's now possible to define the return type of a fetch method for a stored procedure call which returns a typed view: this is required if the stored procedure has output parameters, as Entity Framework doesn't read the output parameters until the resultset has been enumerated, which is too late. The setting controls whether the code generator will generate a workaround for this or not. The workaround changes the method's return type, hence the setting.

comments 6/2/2014 1:53:47 PM

by: Jan Karel Pieterse
I somehow managed to break the commenting function of my site. Today I fixed it again and you can go ahead and ask your questions or add your comments once more.
comments 6/2/2014 8:30:00 AM

by: Jan Karel Pieterse
My free Flexfind tool for Excel has been updated to build 584. Fixed a bug regarding replacing when search was done in Values.
comments 5/8/2014 11:35:00 AM

by: Jan Karel Pieterse
Be really quick now, because registration already closes on 7 May! On 14 May 2014 we are organising the first "Amsterdam Excel Summit" in Amsterdam. An absolutely unique group of Excel MVPs will be in Amsterdam in May 2014 to share their great Excel knowledge with you. These MVPs are in Amsterdam for a meeting, and we have managed to book them for our event. The chances of such an opportunity presenting itself again are slim, so be quick if you don't want to miss this!
comments 4/28/2014 12:15:00 PM