
        MVP Weblogs



This is a reply to "What ORMs have taught me: just learn SQL" by Geoff Wozniak.

I've spent the last 12 years of my life working full-time on ORMs and entity-modeling systems, so I think I know a thing or two about this topic. I'll briefly address some of the points raised in the article.

Reading the article, I got the feeling Geoff never truly understood the material: what ORMs are meant for and what they're not meant for. It's not the first time I've seen an article like this and I'm convinced it won't be the last. That's fine; you'll find a lot of these kinds of articles about many frameworks/paradigms/languages etc. in our field. I'd like to add that I don't know Geoff and therefore have to base my conclusions on the article alone.

Re: intro

The reference to the Neward article made me chuckle: sorry to say it, but bringing that up always gives me the notion one has little knowledge of what an ORM does and what it doesn't do. An ORM is just a tool to translate between two projections of the same abstract entity model (class and table, which result in instances: object and table row); it doesn't magically make your crappy DB look like one designed by Celko himself, nor does it magically make your 12-level deep, 10K-object wide graph persist to tables in a millisecond as if there were just one table. Neither will SQL, for that matter, but Geoff (and Neward before him) silently ignores that.

An ORM consists of two parts: a low-level system which translates between class instances and table rows to transport the entity instances (== the data) back and forth, and a series of sub-systems on top of that to provide entity services (validation, graph persistence, unit of work, lazy / eager loading, etc.).
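The two parts can be sketched in a few lines of code. This is a minimal, illustrative sketch (all class and column names are made up, and real ORMs use mapping metadata rather than hand-written translation): a low-level translator between a table row and an entity instance, with one entity service (validation) layered on top.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrmSketch {
    // The entity class: one projection of the abstract "Customer" entity.
    public static class Customer {
        public long id;
        public String name;
    }

    // Low-level part: translate a table row (here: a column->value map)
    // into an entity instance...
    public static Customer materialize(Map<String, Object> row) {
        Customer c = new Customer();
        c.id = (Long) row.get("customer_id");
        c.name = (String) row.get("name");
        return c;
    }

    // ...and back again, for transporting changes to the DB.
    public static Map<String, Object> dematerialize(Customer c) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("customer_id", c.id);
        row.put("name", c.name);
        return row;
    }

    // Entity-service part: a service (here: validation) built on top of
    // the low-level translation, not part of it.
    public static boolean validate(Customer c) {
        return c.name != null && !c.name.isEmpty();
    }
}
```

The point of the split: the translation layer knows nothing about validation, unit of work, or lazy loading; those are separate sub-systems stacked on top of it.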

It is not some sort of 'magic connector' which eats object graphs and takes care of transforming those into tabular data you don't want to know anything about. It also isn't a 'magic connector' which reads your insanely crappy relational model into a dense object graph as if you had read the objects from memory.

Re: Attribute Creep

He mentions attribute creep (more and more attributes (== columns) per relation (== table)) and FKs in the same section; however, I don't think one is related to the other. Having wide tables is a problem, but it's a problem regardless of what you're using as a query system. Writing projections on top of an entity model is easy if your ORM allows you to, but even if it doesn't, wide tables are a problem of the way the database is set up: they'll be a problem in SQL as well as in an ORM.

What struck me as odd was that he has wide tables and also a problem with a lot of joins, which sounds like he either has a highly normalized model, which should have resulted in narrow tables, or uses deep inheritance hierarchies. Nevertheless, if a projection requires 14 joins, it requires 14 joins: the data itself isn't obtainable in any other way, otherwise it would be doable through the ORM as well (as any major ORM allows you to write a custom projection with joins etc. to obtain the data, materialized in instances of the class type you provide). It's hard to ignore the fact that the author might have overlooked easy-to-use features (which Hibernate provides) to overcome the problems he ran into. At the same time it's a bit odd that a highly normalized model is considered a problem of the ORM but supposedly won't be a problem when using SQL (which has to work with the same normalized tables).

He says:

What attribute creep and excessive use of foreign keys show me is that, in order to use ORMs effectively, you still need to know SQL. My contention with ORMs is that, if you need to know SQL, just use SQL since it prevents the need to know how non-SQL gets translated to SQL.

I agree that you still need to know SQL, as you need to formulate the queries in your code in such a way that they lead to efficient SQL; an ORM can do a bit of optimization, but it is almost impossible to do much without statistics/data (which are not available at that stage). But you can't conclude from that that you should 'just use SQL': that's like recommending learning to write Java bytecode because the syntax of Clojure is too hard to grasp. A better conclusion would be to learn the query system better, so you can predict the SQL which will be produced.
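To illustrate that predictability, here is a toy fluent query API (entirely hypothetical, not any real ORM's query system) where the SQL output follows mechanically from the calls you make: once you know the translation rule per clause, you know the SQL before it's ever generated.

```java
public class QuerySketch {
    private final String table;
    private String where;
    private String order;

    public QuerySketch(String table) { this.table = table; }

    public QuerySketch where(String predicate) { this.where = predicate; return this; }
    public QuerySketch orderBy(String field)   { this.order = field;     return this; }

    // One deterministic rule per clause: nothing is inferred behind
    // your back, so the emitted SQL is predictable up front.
    public String toSql() {
        StringBuilder sql = new StringBuilder("SELECT * FROM ").append(table);
        if (where != null) sql.append(" WHERE ").append(where);
        if (order != null) sql.append(" ORDER BY ").append(order).append(" ASC");
        return sql.toString();
    }
}
```

Real ORM query languages (HQL and the like) are of course far richer, but the principle is the same: learning the translation rules beats abandoning the system.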

Re: Data Retrieval

Query performance is always a concern, and anything between code and the actual execution of the DML in the DB is overhead. Hand-optimized SQL might be a good option in some areas, but in the majority of cases queries generated by ORMs are fine, even Hibernate's ;). Most ORMs have a query language / system which is derived from SQL to begin with (the mentioned Hibernate does: HQL), and it is predictable what SQL it will roughly produce.

Sure, if you create deep inheritance hierarchies over your tables, you might run into a lot of joins, but that's known up front: inheritance isn't free, and one knows what it will do at runtime. "Know the tool you're working with." If Geoff was surprised to see a lot of joins because a 14-entity deep inheritance hierarchy was pulled from the DB, he should have known better.
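Why the join count is known up front can be shown in a small sketch (hypothetical table/class names, modeled on joined-table inheritance where each type has its own table sharing the PK): fetching the most-derived type needs exactly one join per level above it.

```java
import java.util.List;

public class InheritanceSketch {
    // hierarchy is ordered root-first, e.g. ["Party", "Person", "Employee"];
    // in joined-table inheritance each type maps to its own table.
    public static String selectMostDerived(List<String> hierarchy) {
        String leaf = hierarchy.get(hierarchy.size() - 1);
        StringBuilder sql = new StringBuilder("SELECT * FROM " + leaf);
        // walk upward: join each supertype's table on the shared PK
        for (int i = hierarchy.size() - 2; i >= 0; i--) {
            String parent = hierarchy.get(i);
            sql.append(" INNER JOIN ").append(parent)
               .append(" ON ").append(leaf).append(".id = ").append(parent).append(".id");
        }
        return sql.toString();
    }

    // the cost is a function of the model, not a runtime surprise
    public static int joinCount(List<String> hierarchy) {
        return hierarchy.size() - 1;
    }
}
```

A 14-entity deep hierarchy therefore means 13 joins by construction, and that is visible in the entity model before a single query runs.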

He says:

From what I've seen, unless you have a really simple data model (that is, you never do joins), you will be bending over backwards to figure out how to get an ORM to generate SQL that runs efficiently. Most of the time, it's more obfuscated than actual SQL.

I find this hard to believe with the query systems I've seen and written myself, with one exception: Linq. Linq is a bit different because it has constructs (like GroupBy) which work differently in Linq/code than they do in the DB; these require a translation of intent from the query to SQL, and thus can / will lead to a SQL query which might not be what one would expect when reading the Linq query.

The usage of window functions and other DB-specific features (like query hints) might be something not doable in an ORM query language. There are several solutions to that though: one is creating DB functions which are mapped to code methods, so you can use the constructs inside your query through those methods, which results in the functions being used in the SQL query; another is DB views. Both require actions inside the RDBMS, which is less ideal, but if it helps in edge cases, why not? They're comparable to adding an index to some fields to speed up data retrieval, or creating a denormalized table because the data is read-only anyway and it saves the system using it a lot of joins.
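The "DB function mapped to a code method" idea can be sketched as follows (all names are made up for illustration; real ORMs do this through mapping attributes/configuration rather than a string registry): the query system keeps a registry from a code-side method name to a database-side function name, and emits the DB function in the generated SQL.

```java
import java.util.HashMap;
import java.util.Map;

public class FunctionMappingSketch {
    private final Map<String, String> methodToDbFunction = new HashMap<>();

    // map a code-side method name to the DB function it stands for
    public void register(String methodName, String dbFunction) {
        methodToDbFunction.put(methodName, dbFunction);
    }

    // translate a predicate written against the code-side method,
    // e.g. soundex(Name) = 'A123', into SQL using the mapped function
    public String toSqlPredicate(String methodName, String field, String value) {
        String dbFunc = methodToDbFunction.get(methodName);
        if (dbFunc == null) {
            throw new IllegalArgumentException("no DB function mapped for " + methodName);
        }
        return dbFunc + "(" + field + ") = '" + value + "'";
    }
}
```

The method body on the code side never runs against in-memory data here; it exists purely so the query can express intent that the SQL generator translates to the DB-side function.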

Re: Dual schema dangers

Here I saw the struggle Geoff had with the concept of ORMs. This isn't uncommon; Neward (in my opinion) expresses the same struggle in his cited essay. There are two sides with a gap between them: classes and table definitions. Starting with classes and trying to create table definitions from them is equivalent to starting with the table definitions and trying to create classes from them: both are projection results of an abstract entity model. To get one from the other, you have to reverse engineer the side you start with back to the abstract entity model it was a projection of, and then project that model to the side you want to create. Whether you start from classes or from table definitions doesn't matter.

I do understand the pain point when you start with either side and have to bridge the gap to the other side: without the abstract entity model as the single source of truth, it's always a problem to update the other side when one side changes.

Geoff tries to blame this on the ORM, but that's not really fair: the ORM is meant to work with both sides (classes and tables) at run time, not at design time. Managing both sides requires a system meant for modeling an abstract entity model, as both sides are the result of that model, not the source of it. (I wrote one, see 'Links to my work' at the top left. I didn't want to pollute this article with references to my work.)

Re: Identities

Creating new entity instances which get their PK set by a sequence in the DB is the main cause of the problem, if I understand Geoff's description correctly. In memory, these entities have no real ID, and referring to them is a bit of a pain, true. But that's related to working with objects in general: any object created is identified either by some sort of ID you give it or by its memory location ("the instance itself"). I don't get the struggle with the cache and partial commits: if you want to refer to objects in memory, it's the same as what you would do if they weren't persisted to a DB. That they get IDs in the DB in the case of sequenced PKs is not a problem: the objects get updated after the DB transaction completes. Even Hibernate is capable of doing that.
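The write-back mechanism is simple enough to sketch (hypothetical names; the sequence here is a stand-in for a real DB sequence): the new instance has no ID in memory, the insert obtains one from the sequence, and the ORM writes it back into the object.

```java
public class IdentitySketch {
    public static class Order {
        public Long id; // null until persisted: "no real ID yet"
    }

    // stand-in for a DB sequence (e.g. nextval on a PostgreSQL/Oracle sequence)
    private long nextVal = 0;
    private long nextSequenceValue() { return ++nextVal; }

    // "insert": the DB hands out the PK during the transaction, and the
    // ORM updates the in-memory instance once the insert completes
    public void insert(Order o) {
        long generatedId = nextSequenceValue();
        o.id = generatedId;
    }
}
```

Before the insert, the instance is identified by its reference ("the instance itself"); after it, the DB-assigned PK takes over, and any in-memory references to the instance remain valid throughout.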

Re: Transactions

This section is a typical description of what happens when you confuse a DB transaction with a business transaction. A business transaction can span more than one DB transaction, might involve several subsystems / services, might even use message queues, might even be parked for a period of time before commit. A DB transaction is more explicit and low-level: you start the transaction, you do work, you commit (or rollback) the transaction and that's it.

Geoff's reference to scope is good: it illustrates that there's a difference between the two, and that you therefore shouldn't use a DB transaction when you need a business transaction. It's too bad, however, that he misses this himself. Developers often try to implement a business transaction at the level of an ORM by using its unit of work, but it's too low-level for that: a business transaction might span several systems, and an ORM isn't the right system to control such a transaction; it's meant to control one DB transaction, that's it.

That doesn't mean the ORM shouldn't provide the tools to help a developer write proper business transaction code together with the systems controlling the business transaction. After all, the second part of an ORM is 'entity services', and one of those is 'unit of work'. Most ORMs follow the Ambler paper and combine a unit of work with their central Session or Context object. This leads to a problem: you can't offer a unit of work without the central Session or Context object. So when you actually want a unit of work to pass around, collecting work for (a part of) the business transaction, you're still forced to deal with a Session / Context object which also controls the DB connection / transaction, while at that level / scope it might not even be allowed / possible to do DB-oriented work.

It's therefore essential to have an ORM which offers a separate unit of work object, which solves this problem. In addition, the developer has to be aware that a business transaction is more than just a DB transaction and should design the code accordingly.
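The decoupling described above can be sketched like this (illustrative names throughout; real unit of work implementations track entity state rather than strings): the unit of work only collects work and never touches a connection; a separate session-level commit executes the collected work inside a single DB transaction.

```java
import java.util.ArrayList;
import java.util.List;

public class UnitOfWorkSketch {
    // Pass this object around freely: it holds no connection,
    // no transaction, just the pending work.
    public static class UnitOfWork {
        private final List<String> pendingWork = new ArrayList<>();
        public void addForSave(String entity)   { pendingWork.add("SAVE " + entity); }
        public void addForDelete(String entity) { pendingWork.add("DELETE " + entity); }
        public List<String> pending() { return pendingWork; }
    }

    // Session-level part: only here does a DB transaction come into play.
    // Returns the executed statements so the flow is visible.
    public static List<String> commit(UnitOfWork uow) {
        List<String> executed = new ArrayList<>();
        executed.add("BEGIN TRANSACTION");
        executed.addAll(uow.pending());
        executed.add("COMMIT");
        return executed;
    }
}
```

Layers that participate in the business transaction can append work to the unit of work without ever being allowed to open a connection; the DB transaction exists only inside commit.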

Re: Where do I see myself going

A highly normalized relational model (4th normal form or higher) which is used to retrieve denormalized sets is not likely to perform well (as the chance of a high number of joins in most queries is significant), no matter what query system you're using. I get the feeling that what Geoff ran into is partly caused by reporting requirements (which often require denormalized sets of (aggregated) data), partly by inheritance hierarchies (not mentioned, but given the number of unexpected joins I think this is the case) and partly by poorly designed relational models.

None of those are solved magically if you use SQL instead of HQL or whatever query language you're using in an ORM. Not only is 'SQL' a query language and not a query system, it also doesn't make the core problems go away. Well, perhaps the inheritance one as you can't have inheritance in SQL, but then again, you're not forced to use inheritance in your entity model either.

He says:

By moving away from thinking of the objects in my application as something to be stored in a database (the raison d'être for ORMs) and instead thinking of the database as a (large and complex) data type, I've found working with a database from an application to be much simpler.

Here Geoff clearly illustrates a misconception about ORMs: they're not there to persist object graphs into some magic box in the corner; they're a system to move entity instances (== data) across the gap between two projections of the same abstract entity model. It's no surprise it turns out to be much simpler if you see your DB as part of your application, because it is part of your application. If we ignore the difference in level of abstraction, talking to a DB through a REST service is equivalent to talking to a DB through an ORM which provides you with data: both times you go through an API to work with the entity instances on the other side. The REST service isn't a bucket you throw data into, and neither is the ORM.

Re: conclusion

SQL is a query language, not a query system. It's therefore not an alternative to the functionality provided by an ORM. ORMs make some things, namely the things they're built for, very easy. They make other things, namely the things they're not built for, hard. But the same can be said about any tool, including SQL (if we see a language as a tool): SQL is set-oriented, and therefore imperative logic is hard to do, so one shouldn't do imperative logic in SQL. Blaming SQL for being crap at dealing with imperative logic doesn't make it so; it merely shows the person doing the blaming doesn't understand what SQL is meant to do and what it isn't meant to do.

In closing, I'd like to note that the article ignores the optimized SQL ORMs generate with respect to e.g. updates and graph fetches (eager loading). Not to mention the fact that to execute the SQL query and consume the results, one has to write a system which is the core of any ORM: the low-level query execution system and object materializer.

It always pains me to read an article like Geoff's about a long struggle with ORMs, as it's often based on a set of misconceptions about what ORMs do and what they don't do. This is partly to blame on some ORM developers (let's not name names) who try to sell the image that an ORM is a magic object-graph persister which will turn your RDBMS into an object store. It's also partly to blame on the complexity of the systems themselves: you don't simply learn how to use all of an ORM's features and quirks overnight.

And sadly, it's also partly to blame on the users: the developers using the ORMs themselves. Suggesting a query language as the answer (and with that the tools that come with it) isn't going to solve anything: the root problem, working with relational data in an OO system, i.e. bridging the gap between class and table definition, still has to be solved, and using SQL and low-level systems to execute it will only move that problem onto your own plate, where you run the risk of re-inventing the wheel, albeit poorly.


comments 8/5/2014 12:53:54 PM


In 2012, I thought it might be a good idea to register for a Windows Store account, oh sorry, 'Windows Developer Services account'. As you might recall, signing up was a bit of a pain. After a year, I decided to get rid of it: I hadn't done anything with it, nor did I expect to do anything with it in the future, and as it costs money, I wanted to close the account. That too was a bit of a pain.

To sign up for a Windows Store account / Windows Developer Services account, Microsoft outsources the verification process to Symantec. The verification process is to make sure that the person who signed up (me) really works at company X (I even own it), and Symantec is seen by Microsoft as up to the task of doing that. As you can read in my sign-up blog post, the process includes Symantec contacting a person at the company other than the person who registered, who has to confirm that I am who I say I am.

Is Symantec, a totally different company than Microsoft, really up to the task? Well, let's see, shall we? As you can read above, I closed my Windows Store account almost a year ago. One would think that by now Microsoft would have sent Symantec a memo stating that the individual 'Frans Bouma' is no longer a Windows Store developer card-carrier. In case they have (which I can't verify, pun intended), Symantec has a lousy way of keeping track, as last week my company received a lovely request from Symantec to verify with them whether 'Frans Bouma' was indeed working for my company and I was who I said I was. You know, for the Windows Developer Services account.

Now, the following might read like I stepped into the oldest phishing trap in the book, but everything checked out properly: we use plain-text email only, copied the URLs over, and the URLs were simple and legit.

We first thought it was spam/phishing, so we ignored it. But this morning a new email arrived as a reminder. So we painstakingly went over every byte in the email and its headers. The headers checked out (all routed through Verisign, now part of Symantec, and Symantec itself), the URLs in the email checked out (we only look at plain-text emails), and the email was sent to the same person who verified me 2 years ago, so we concluded it must be legit. We had a good laugh about it, but what the heck, let's verify again. How would that work exactly, that verification process?

So we copied the URL from the plain-text version of the email (which was a simple URL into Symantec) to a browser; it arrived at Symantec, listed info about my account, and all that was left to be done was click the verify button. It's laughably simple: just click a button! I do recall the first time it was a phone call, but instead of getting rid of this whole Symantec bullshit, Microsoft apparently decided that clicking a button instead is equal to 'making things simpler'.

After a couple of minutes, I received an email in my mailbox cheering 'congratulations!': I was re-verified, my Microsoft Developer Services account was renewed and I could keep developing apps for the Windows Store.

But… I ended my account almost a year ago? Or did I? To verify whether I really got rid of this crap or not, I went to the sites I visited before to register and end the account, but they only showed me Xbox Live stuff, no developer account info.

Headers of reply email:

Received: from spooler by sd.nl (**********************); 29 Jul 2014 10:03:43 +0200
X-Envelope-To: frans********************
Received: from authmail1.verisign.com (69.58.183.55) by **********************
 (***********************) with Microsoft SMTP Server id 14.3.174.1; Tue, 29 Jul 2014
 10:06:08 +0200
Received: from smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com
 (smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com [10.244.24.61])	by
 authmail1.verisign.com (8.13.8/8.13.8) with ESMTP id s6T8674q001640
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)	for
 <frans@**********************>; Tue, 29 Jul 2014 08:06:07 GMT
Date: Tue, 29 Jul 2014 08:06:07 +0000
From: <microsoft.orders@symantec.com>
To: <frans@***********************>
Message-ID: <1717526233.2131406621167061.JavaMail.support@geotrust.com>
Subject: Informatie over Microsoft Developer Services-account **********************
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Loop-Check:
Return-Path: microsoft.orders@symantec.com
X-MS-Exchange-Organization-AuthSource: **********************
X-MS-Exchange-Organization-AuthAs: Anonymous
X-MS-Exchange-Organization-PRD: symantec.com
X-MS-Exchange-Organization-SenderIdResult: None
Received-SPF: None (**********************: microsoft.orders@symantec.com does not
 designate permitted sender hosts)
X-MS-Exchange-Organization-SCL: 0
X-MS-Exchange-Organization-PCL: 2
X-MS-Exchange-Organization-Antispam-Report: DV:3.3.13320.464;SID:SenderIDStatus None;OrigIP:69.58.183.55
X-MS-Exchange-Organization-AVStamp-Mailbox: MSFTFF;1;0;0 0 0
MIME-Version: 1.0

(replaced sensitive own info with ****)

I wonder: will Symantec keep trying to verify me as a Windows Store developer for the rest of my life, even though I no longer have a subscription to that Microsoft service? The data about this account in Symantec's databases will likely never be purged, unless they get rid of the account data from Microsoft entirely, or I stop verifying (and probably not even then).

In 2012 I already found it pretty bad that my account info with Microsoft was shared with a 3rd party, Symantec, and today I find it even worse: I no longer have a Windows Store dev account with Microsoft, but Symantec a) still thinks I do and b) keeps the information about me, while I never had the intention of signing up with Symantec at all.

Microsoft will never attract large droves of devs writing apps for its Windows Store unless it makes the whole process seamless and without leaking sensitive information to 3rd party corporations who can do whatever they please with it.


comments 7/29/2014 11:03:39 AM


We've released LLBLGen Pro v4.2 RTM! v4.2 is a free upgrade for all v4.x licensees and if you're on v3.x, you can upgrade with a discount.

For what's new, I'd like to refer to the what's new page on the LLBLGen Pro website. :)


comments 7/2/2014 1:33:08 PM

by: Jan Karel Pieterse

RefTreeAnalyser 2.0 has just been updated: improved performance of formula checking, and added formula block highlighting. Ever had to work out the logic of other people's Excel files? Ever had to untie the spaghetti knots of a large Excel workbook's formulas? Then you know what a nightmare this can be! Now there is the RefTreeAnalyser! With this tool, finding out how a cell in a workbook derives its results and which other cells depend on that cell is a breeze.
comments 6/20/2014 5:50:00 PM


This morning we've released LLBLGen Pro v4.2 BETA! The beta is available to all v4 customers and can be downloaded from the customer area -> v4.2 section.

Below is the extensive list of new / changed features. Enjoy! :)

LLBLGen Pro v4.2 beta, what's new / changed.

Main new features / changes

General

  • Allowed Action Combinations: Specify which actions are allowed on an entity instance: Any combination of Create/Read/Update/Delete.
    Supported on: LLBLGen Pro Runtime Framework (all combinations, R mandatory), NHibernate (CRUD and R).
    Action Combinations make it easy to define e.g. entities which can only be created or read but never updated nor deleted. The action combinations are defined at the mapping level, are checked inside the runtime, and are in addition to the authorization framework.
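Conceptually, an allowed-action combination can be modeled as a bitmask over CRUD flags with R mandatory. The sketch below is a hedged illustration of that idea only, not LLBLGen Pro's actual implementation or API:

```java
public class ActionCombinationSketch {
    public static final int CREATE = 1, READ = 2, UPDATE = 4, DELETE = 8;

    private final int allowed;

    public ActionCombinationSketch(int allowedCombination) {
        // R is mandatory: every combination must at least permit reads
        if ((allowedCombination & READ) == 0) {
            throw new IllegalArgumentException("Read is mandatory");
        }
        this.allowed = allowedCombination;
    }

    // runtime check: is every bit of the requested action granted?
    public boolean isAllowed(int action) {
        return (allowed & action) == action;
    }
}
```

A read-only entity would be configured with just READ; attempts to delete an instance of it are then denied by a single bitmask test at runtime.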

Designer

  • Copy / Paste support for all model elements (entity, value type, typed list, typed view, table valued function call, stored procedure call):
    Paste full (with mappings and target tables) or just model elements, across instances (stand alone designer only) or within the project (VS.NET integration and standalone designer).
  • Automatic re-apply changed settings on existing project:
    e.g. changing a pattern for a name will reapply the setting on the existing model, making sure the names comply with the setting value.
  • New name patterns for auto-created FK/UC/PK constraints (model first).
    This makes it possible to define a naming pattern for e.g. FK constraints other than the default FK_{guid}. You can use macros to make sure the FK name reflects e.g. the fields and the tables it is referencing.
  • It's now possible to save search queries in the project file.
  • Ability to define default constraints for types, per type - DB combination (model first).
    This makes it possible to for example define a custom type, e.g. EmailAddress, based on the .NET string type, with length 150 and a default of "undefined@example.com" for SQL Server and then define a field in an entity with type 'EmailAddress'. Creating the database tables from this model in the designer will then result in a default constraint on the table field the email address field is mapped on with value "undefined@example.com".
  • General editors per project element type:
    one editor which is kept open and shows the element selected in the project explorer, making it very easy to check / edit configurations on multiple elements. This makes it possible to e.g. edit or look at mapping data for several entities quickly: open the general entity editor, open the field mappings tab, and select the entities to check / edit in the project explorer; the field mappings tab stays visible, so the data of each selected entity is shown in turn.
  • Intellisense helpers in QuickModel for types, names and relationship types:
    It's now possible to open helper lists of names in scope, types available and the list of relationship types to help you write quick model expressions more easily.
  • Hide / Filter warnings:
    It's now possible to hide / filter out warnings in the error/warning pane based on warning ID. The hidden/filtered out warnings are viewable again using a toggle and which IDs are filtered out is stored in the project.
  • Element selection rules on tasks (code generator).
    It's now possible to define selection rules on tasks in a run queue for the code generator which select which elements participate in the task, based on setting values. This makes it easy to define a setting for a user which is then taken into account in the code generator to execute different tasks based on the value of the setting.
  • New refactoring: replace selected fields with existing value type.
    This makes it easier to work with value types in the designer: if a selected value type matches (based on a set of defined rules) the selected fields, the fields are replaced with the selected value type and mappings are adjusted accordingly.
  • Automatically assign found sequences to entity fields based on a pattern (database first).
    Based on a name pattern the reverse engineering engine will select fields of entities which should get a sequence assigned to them, if the name pattern resolves to a name of a found sequence. This makes it easier to reverse engineer models from databases which use sequences for identity values, like Oracle and PostgreSQL.

LLBLGen Pro Runtime Framework

  • Expression support during Inserts
    It's now possible to define an expression on an entity field which is used during inserts. The expression defined is used to produce the field value.
  • Generate Typed Lists as POCO classes with a Linq or QuerySpec query.
    It's now possible to generate a typed list or all typed lists (controllable through settings) as a simple POCO class which holds the data of a row in the resultset and a Linq or QuerySpec query to execute the typed list.
  • Generate Typed Views as POCO classes and use them in Linq and QuerySpec.
    It's now possible to generate a typed view or all typed views (controllable through settings) as a simple POCO class and use it in Linq or QuerySpec queries.
  • Transparent Transient Error Recovery (adapter only).
    The transient error recovery system introduced in v4.1 has been upgraded so it can now be used transparently: define once and it is automatically used when executing a query. It's no longer necessary to explicitly execute a query through a recovery strategy.
  • Cached resultset tagging for easy cache purge/retrieval
    It's now possible to tag a query's resultset if that resultset is cached so the resultset can be retrieved from the cache using the tag and also it's now possible to purge the resultset(s) associated with the tag from the cache.
  • Action Combination support (see above).
    It's now possible to define an entity type as e.g. Read Only or Read / Create (or any of the other combinations); the engine will automatically check at runtime whether an action (e.g. the delete of an entity instance) is allowed, and will deny the action if it isn't part of the defined allowed action combinations of the entity type.

Entity Framework

  • Code First support (Entity Framework v6+)
    It's now possible to generate Entity Framework v6 code with Code First mappings instead of EDMX using mappings. This allows you to keep using model first or database first modeling techniques in the designer and emit Code First output: POCO classes for the entity model and Code First code defining the model mappings.

NHibernate

  • Support for Read-only entities (See Action Combinations above)
  • String lengths are now emitted into the mappings:
    The lengths in the mappings make sure NHibernate makes the right decisions at runtime with respect to strings.

Minor features / changes

General

  • Support for <DependentUpon> element in CS/VBProj files (Code generator)
  • Support for default presets (Code generator)

Designer

  • Added a setting to control whether names are singularized during reverse engineering
  • When a relationship is marked as 'ModelOnly', the backing FK (and UC) of the original relationship (from when it wasn't model-only) are now removed from the relational model data, provided the FK (and UC) were created by the designer and no other model element relies on them (e.g. another relationship). Previously, they were kept around.
  • .NET 4.5.2 has been added as a supported platform
  • A directive has been added to the designer's config file to enable (it's disabled by default) high-DPI winforms support on .NET 4.5.2.
  • Context menus for entities in project explorer and model views have been re-ordered and more commands have been added to make working with the elements through context menus more convenient.
  • Stored procedure call parameters and Table-valued-function call parameters are now selectable in the code generation info tab and settings specifically for these elements will now show up there.
  • SQL Server 2014 is now a supported database (through the SQL Server driver/templates).
  • When a typed view is mapped onto a stored procedure resultset, it will now use the stored procedure name strip pattern instead of the Table Valued Function strip pattern to produce a proper procedure name for the macro {$ProcFunctionName}.
  • The default sorting in the error lister is no longer on 'Time' but on message type, then source, so errors appear first, then warnings; within each message type, messages are sorted on source, ascending.
  • Fixed: in a typed list, when a relationship join hint was changed, the project wasn't marked as 'changed'.
  • When a project is loaded, all root nodes are now collapsed, which makes it easier to work with larger projects.
  • When a new element is added to the project (e.g. an entity or typed view, either directly or through reverse engineering), the state of the root nodes is remembered, so the root nodes no longer all expand when an element is added; only the root node of the added element(s) is expanded to show the new elements.
  • Setting an existing field to a type shortcut which has a default length/precision/scale will now set those values for the field's maxlength/precision/scale, overwriting any existing value.
  • Multi-line input support in QuickModel: it's now possible to paste multiple lines with quick model statements in the input box, which are then executed one by one
  • Preference names are now beautified: they are properly word-broken and lower-cased, and thus easier to read than the previous preference names, which were equal to the camel-cased property names.
  • Catalog explorer details are now automatically shown when a node is selected if details viewer is open.
  • A 'Collapse Child Nodes' feature has been added to the context menu of certain nodes in the project explorer and catalog explorer. All nodes with child nodes that can themselves have child nodes now have a 'Collapse Child Nodes' command in their context menu, which makes it easier to reset the tree to a workable form after many expand actions.
  • PostgreSQL driver now also obtains materialized views as 'views'. Postgresql servers v9.3 or later required.
  • It's now possible to define different default values for resultset retrieval. A driver retrieves stored procedure resultsets using default values for the parameters of the selected procedures. On the wizard tab for stored procedure selection, the user can now click a button to define different default values for the supported value types (and string), to avoid stored procedures being excluded because they reject the original default values. For example, a stored procedure which returns an error unless an int parameter is larger than 0 will no longer be excluded if the default for int is set to a value larger than 0.
  • The license file can now also be placed in 'My Documents\LLBLGen Pro'.

LLBLGen Pro Runtime Framework

  • QuerySpec: Multiple calls to query.From(operand) now append the operand to the existing From clause if the operand starts with QueryTarget. If it doesn't start with QueryTarget and there's an existing From clause, the existing From clause is overwritten.
  • OData: The OData Support Classes now support the IgnorePropertiesAttribute on entity classes. The names specified using the attribute have to be defined on the entity type the attribute is defined on, so inherited properties can't be filtered out using this attribute.
  • Low level api: Duplicate sort clauses are now filtered out, so duplicates accidentally added through e.g. OData no longer cause exceptions at runtime.
  • Dynamic Query Engines: when the source of a field isn't known, the field creator functionality no longer emits a dangling '.' but simply emits only the field name/alias. This way constructs like .OrderBy("Name".Ascending()) work, and the engine emits ORDER BY [Name] ASC. Previously this construct resulted in ORDER BY .[Name] ASC, which failed.
  • Query traces: The value of a DateTime parameter in a query is now emitted as an ISO 8601 / roundtrip formatted string, which is more precise than the previous ToString() call on the DateTime, which didn't include fractions of a second.
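The difference between the two formats can be seen with plain .NET formatting — a minimal illustration using the standard roundtrip format specifier "o", not LLBLGen-specific code:

```csharp
using System;
using System.Globalization;

class DateTimeTraceFormatDemo
{
    static void Main()
    {
        var dt = new DateTime(2014, 6, 2, 13, 53, 47, 123);

        // Plain ToString(): no fractional seconds.
        Console.WriteLine(dt.ToString(CultureInfo.InvariantCulture));
        // -> 06/02/2014 13:53:47

        // ISO 8601 / roundtrip ("o") format: includes fractions of a second.
        Console.WriteLine(dt.ToString("o", CultureInfo.InvariantCulture));
        // -> 2014-06-02T13:53:47.1230000
    }
}
```

The "o" format is also culture-invariant and round-trippable via DateTime.Parse with DateTimeStyles.RoundtripKind, which makes it a good fit for trace output.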
  • FunctionMappings added (Linq/QuerySpec): sbyte/byte/ushort/short/uint/int/ulong/long.ToString() mappings have been added to all DQEs for all databases.
  • EntityBase(2).AllowReadsFromDeletedEntities allows code to read from an entity that has been marked as deleted. Its default is 'false', which results in an exception if code reads from a deleted entity, as in previous versions.
  • SQL Server 2014 is now a supported database (through the SQL Server DQE/templates). Use 2012 compatibility to utilize the 2012 or higher features.
  • FIX: QuerySpec: A projection lambda was created using a parameter which was recreated for every new query, which resulted in a new cache key for the lambda, so the lambda was compiled every time instead of re-using a cached version. The lambda is now created using the same parameter as the original, and the cached compiled version is re-used in subsequent executions of the same projection, so query creation is a bit quicker.
  • FIX: QuerySpec: QuerySpec didn't properly replace function mappings in derived tables.
  • QuerySpec: There's now a class available to create a projection lambda quickly for Select<T>() calls, called ProjectionLambdaCreator. This class has two overloads of its Create() method, which create at runtime a projection lambda for T from either a specified fields set or a fields creation class (e.g. CustomerFields). The overload which accepts the fields creation class caches the created lambda and is therefore much faster than a lambda written in code and compiled by the C# / VB.NET compiler, which creates a new Expression<Func<T>> at runtime each time it runs.
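The performance gain comes from caching the compiled delegate behind a stable key. A hypothetical sketch of that technique in plain C# — this is not the actual ProjectionLambdaCreator implementation, and the names here (CachedProjector, Get) are illustrative only:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

// Sketch: compiling an Expression<Func<...>> is expensive, so the compiled
// delegate is cached per target type and re-used on subsequent calls.
static class CachedProjector
{
    static readonly ConcurrentDictionary<Type, Delegate> _cache =
        new ConcurrentDictionary<Type, Delegate>();

    public static Func<object[], T> Get<T>(Expression<Func<object[], T>> projection)
    {
        // First call per T compiles the lambda; later calls return the cached delegate.
        return (Func<object[], T>)_cache.GetOrAdd(typeof(T), _ => projection.Compile());
    }
}
```

A lambda written inline in a query and compiled on the spot can't be cached this way, because each compilation produces a fresh expression tree; a stable key like a fields creation class is what makes the cache hit possible.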
  • QuerySpec: There's now a special Select method available which produces its own lambda projector from two types given: .Select<SomeDTO, SomeElementFields>()
  • QuerySpec: The usage of QueryTarget is now also supported in DynamicQuery / DynamicQuery<T> instances, but only for appending join operands to an existing query.

  • QuerySpec: It's now possible to clone the projection of a derived table/aliased subquery in an outer query's Select() method, so you no longer need to create a new field for each targeted subquery field to effectively clone the projection of a subquery.

  • It's now possible to generate case insensitive SQL for case sensitive databases using a setting.


Entity Framework

  • It's now possible to define the return type of a fetch method for a stored procedure call which returns a typed view: this is required if the stored procedure has output parameters, as Entity Framework doesn't read the output parameters until the resultset has been enumerated, which is too late. The setting controls whether the code generator generates a work-around for this or not. The work-around changes the method's return type, hence the setting.

comments 6/2/2014 1:53:47 PM

door: Jan Karel Pieterse
I somehow managed to break the commenting function of my site. Today I fixed it again and you can go ahead and ask your questions or add your comments once more.
comments 6/2/2014 8:30:00 AM

door: Jan Karel Pieterse
My free Flexfind tool for Excel has been updated to build 584. Fixed a bug regarding replacing when the search was done in Values.
comments 5/8/2014 11:35:00 AM

door: Jan Karel Pieterse
Be really quick now, because registration closes already on May 7! On 14 May 2014 we are organising the first "Amsterdam Excel Summit" in Amsterdam. An absolutely unique group of Excel MVPs will be in Amsterdam in May 2014 to share their great Excel knowledge with you. These MVPs are in Amsterdam for a meeting and we have succeeded in booking them for our event. There is little chance that such an opportunity will arise again, so be quick if you don't want to miss this!
comments 4/28/2014 12:15:00 PM

door: Jan Karel Pieterse
Be quick to join us in Amsterdam on May 14 2014, for the registration for the first Amsterdam Excel Summit closes on May 7th! An absolutely unique group of Excel MVPs will gather in Amsterdam to share their expert knowledge with you. The Excel MVPs happen to be in Amsterdam for a meeting and we've succeeded in getting some of them to present at our event. There is not much chance of this happening again anytime soon, so make sure you register!
comments 4/28/2014 12:15:00 PM

door: Jan Karel Pieterse
I have added a new article to my site, describing a class module to help you measure your VBA performance.
comments 4/15/2014 7:15:00 PM

door: Jan Karel Pieterse
If you have ever had that annoying message about circular references when you created a formula or opened an Excel file, read this new article. Excel detects a circular reference as soon as a chain of formulas causes the same cell to be visited more than once in a series of calculations. Many users find the circular reference message extremely confusing and have no idea what causes it. In this article I try to take away the mystery surrounding this situation.
comments 4/14/2014 1:00:00 PM

door: jubo
It has been a long time since the last post here, but today is an important day. Microsoft ends support and security updates for Windows XP. It was a great platform but it's time for a change. Check out your options here.
comments 4/8/2014 5:58:12 PM

door: Jan Karel Pieterse
RefTreeAnalyser 2.0 has just been updated. Added hotkeys for Check Formulas and for Objects. Ever had to work out the logic of other people's Excel files? Ever had to untie the spaghetti-knots of a large Excel workbook's formulas? Then you know what a nightmare this can be! Now there is the RefTreeAnalyser! With this tool, finding out how a cell in a workbook derives its results and what other cells depend on the cell is a breeze.
comments 4/3/2014 6:00:00 PM

door: Jan Karel Pieterse
RefTreeAnalyser 2.0 has just been updated. Improved reporting and fixed a bug or two. Ever had to work out the logic of other people's Excel files? Ever had to untie the spaghetti-knots of a large Excel workbook's formulas? Then you know what a nightmare this can be! Now there is the RefTreeAnalyser! With this tool, finding out how a cell in a workbook derives its results and what other cells depend on the cell is a breeze.
comments 3/31/2014 7:30:00 PM

door: Jan Karel Pieterse
I have added a new page to my article on Slicers, which shows you how to get the selected items of a slicer into a worksheet cell.
comments 3/21/2014 4:45:00 PM