Stored procedures are bad, m'kay?

Let me start with a blunt statement: stored procedures are bad; they are a bad way to formulate data-access logic. I can't state that strongly enough. Today I stumbled into a blog by Rob Howard which tries to convince the reader that whatever you do, use stored procedures! With tears in my eyes I read the arguments he brings to the table. Some are silly, and one is even based on completely wrong information and assumptions, so far away from the truth it hurts. I've blogged about stored procedures before (here, here and here) and I used them a lot for 8 years, but I've been almost stored procedure 'free' for 8 months now, and I love the feeling. The reason is obvious: the stress of maintaining a lot of stored procedures, of writing another stored procedure for each thing you want to do, is gone. Dynamic SQL is the future. (Dynamic SQL is generated on the fly by a generic piece of code which takes various data as input and produces a parametrized query from it, which can be cached on the client and will be cached on the server. Dynamic SQL is generated from objects written in C# or VB.NET.)
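To make that parenthesized definition concrete, here is a hypothetical sketch of what such a generic piece of code could emit for "save this Customer entity". The entity, table and column names are invented; the ADO.NET calls are the standard SqlCommand/SqlParameter API.

```csharp
using System.Data.SqlClient;

// Hypothetical entity; in the post's scenario this would be a generated class.
public class Customer
{
    public int CustomerId;
    public string Phone;
}

public static class SaveSketch
{
    // What a dynamic SQL engine could emit to save a Customer entity: a
    // parametrized statement, so the SQL text contains no data values and
    // the server can cache and reuse its execution plan.
    public static SqlCommand CreateSave(Customer customer)
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE [Customer] SET [Phone] = @p0 WHERE [CustomerId] = @p1");
        cmd.Parameters.AddWithValue("@p0", customer.Phone);       // changed field
        cmd.Parameters.AddWithValue("@p1", customer.CustomerId);  // PK filter
        return cmd;
    }
}
```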

Let's walk through Rob's list of arguments and see what we can make out of it. Let me start by saying that Rob's list of arguments is the default list of arguments any DBA will offer you when you ask him/her why stored procedures are better. However, keep in mind that the DBA most likely will offer you that list to save his/her job.

  • Ad-hoc SQL Script is brittle
    Sure, it is generated on the fly and gone once executed (except for the execution plan, Rob, but more on that later). Rob argues that putting ad-hoc SQL in your BL code is not that good. Agreed, but who said that stored procedures are the only other option to solve that problem? ... small changes to the database can have severe impacts on the application. Yes, that's right; however, changes to a relational model will always have an impact on the application that targets that model. Take adding a non-nullable column to a table: whether you use stored procedures or ad-hoc queries, you have to change the calling code to make sure that column gets a value when a new row is inserted. For ad-hoc queries, you change the query and you're set. For stored procedures, you have to change the signature of the stored procedure, since the INSERT/UPDATE procs have to receive a value for the new column. This can break other code targeting the stored procedure as well, which is a severe maintenance issue. A component which generates the SQL on the fly at runtime doesn't suffer from this: it will, for example, receive an entity which has to be saved to the database; that entity contains the new field, the SQL is generated and the entity is saved. No maintenance problems. With a stored procedure this wouldn't be possible.
  • Security
    Ah yes, the good old dead horse, security! The most-heard argument in favor of stored procedures is that with stored procedures, security can be controlled using a fine-grained mechanism: place the execution rights on the procedure and you're set. Well... ever heard of role-based security? Rob Howard obviously hasn't. Security is a subject that has to be taken seriously, very seriously. Therefore, a good DBA defines user roles in SqlServer. Users are added to those roles, and rights are defined per role, not per user. This way you can easily control which users can insert/update and which users can, for example, select or delete or have access to views: in most applications you have 2 roles: the average user, and the administrator who configures the application. Define 2 roles in the SqlServer database, place the users in the right role, define the rights per role on the tables and views, and off you go: fine-grained security which works, without a single stored procedure in sight. A new user is added? Just add the user to the right role and it has the rights it should have.
    Another nice object in SqlServer, which I mentioned earlier, is the view. Views are there to control which data is accessed, on a column basis or a row basis. This means that if you want user U to select only 2 or so columns from a table, you can give that user access to a view, not the underlying table. The same goes for rows in one or more tables: create a view which shows those rows, filtering out the others. Give access rights to the view, not the table, obviously using user roles. This way you can limit access to sensitive data without having to compromise your programming model by moving to stored procedures. Views are totally ignored by Rob Howard, while the view was created especially for this purpose. See Books Online: Using Views as Security Mechanisms.
    Rob mentions SQL injection attacks. SQL queries which are constructed by simply concatenating data into a query statement are indeed open to SQL injection attacks. However, because ADO.NET has great parameter support, why would anyone neglect this fine instrument? A good dynamic SQL engine creates parametrized queries, which are not only faster (because the execution plan is cached, yes Rob, it is), they are also not open to SQL injection attacks, thanks to the parameters.
  • Performance
    Everyone who thinks stored procedures are pre-compiled, say "Aye!". Whoa, what a noise! For all of you who said "Aye!" a few seconds ago: open SqlServer's Books Online (v7 or v2000, doesn't matter), search for "cache execution plan". You'll find fine articles like "Execution Plan Caching and Reuse" and "SQL Stored Procedures". Let me just quote some lines from the "SQL Stored Procedures" article:
    SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans.
    I didn't make that up, people. It's there, for a long time (since SqlServer 7.0).
    However, what does Rob Howard say? I quote:
    There are also internal performance benefits to SQL Server for using stored procedures vs. ad-hoc SQL script. When stored procedures are used SQL Server can cache or pre-compile the ‘execution plan’ that it uses to execute the SQL vs. having to recalculate the execution plan on each request.
    Isn't that a little wrong, Rob? There is no precompilation of stored procedure code in SqlServer. It caches execution plans for each query, including ad-hoc queries. Even better: it will parametrize queries which don't even have parameters, to keep the execution plan in the cache!
    Rob also claims that in 'many cases' you can better loop through rows in T-SQL than in the data-access layer, because doing it in T-SQL is faster. Now, looping through rows in T-SQL requires a cursor. A cursor will typically create a work table in tempdb, which can hurt performance pretty badly if tempdb is full and has to be resized; doing a lot of looping will require that sooner or later. Besides that, T-SQL is not a procedural language, it's a set-based language (© Joe Celko). This means that the only power T-SQL has over procedural languages is that it can work on sets of data very fast. Looping through a resultset means either that you didn't understand what set-based means, or that what you want can't be done in a set-based batch of statements. In that case it's better to choose the best language for the procedural statements, and feed it the least amount of data needed to perform its task.
    Now, let me add something about performance here. Say you have a database with 100 tables, with an average of 7 fields per table. This requires 100 update stored procedures, at least (the crUd procs). These 100 procedures will have on average 7 parameters to update all fields at once, because you can't create optional update statements in stored procedures. You can't create stored procedures for all possible combinations either; that would require 100 × (2^7 − 1) procedures, one per combination of fields, which is quite a lot. A component which creates the SQL on the fly will tailor the UPDATE query to the job it has to do, for example updating that single varchar field in a row with 2 blob fields. The blob fields aren't passed to the database again, because those fields aren't updated. Another nice thing about creating SQL on the fly is that you can create the WHERE predicate on the fly. Foreseeing every possible WHERE predicate for an UPDATE stored procedure is undoable. You can fall back to optional parameters, but that will do you no good either, because COALESCE isn't that great when it comes to performance (see my previous blogs about dynamic SQL and stored procedures).
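Going back to the Security point above: the roles-plus-views setup costs only a handful of T-SQL statements. Below is a sketch of what a DBA would run on SqlServer 2000 (sp_addrole, sp_addrolemember and GRANT are the real mechanisms; the role, user, view and table names are invented), wrapped in a C# constant only to keep all samples in one language.

```csharp
// The role-and-view security setup described above, as the T-SQL a DBA
// would run. Role, user and view names are invented for illustration.
public static class SecuritySetupSketch
{
    public const string Script = @"
EXEC sp_addrole 'AppUser';
EXEC sp_addrole 'AppAdmin';
EXEC sp_addrolemember 'AppUser', 'jdoe';

-- Expose only two columns of Customer through a view...
CREATE VIEW vw_CustomerContact
AS
SELECT CustomerName, Phone FROM Customer;

-- ...and grant rights per role, never per user. AppUser gets the view,
-- not the base table, so the other columns stay out of reach.
GRANT SELECT ON vw_CustomerContact TO AppUser;
GRANT SELECT, INSERT, UPDATE, DELETE ON Customer TO AppAdmin;
";
}
```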

Stored procedures also open up a maintenance problem, because they form an API by themselves. Changing an API is risky: it can break a lot of code. Adding new functionality or new procedures is the "best" way to extend an existing API, and a set of stored procedures is no different. This means that when a table changes, or the behaviour of a stored procedure changes and it requires a new parameter, a new stored procedure has to be added. This might sound like a minor problem but it isn't, especially when your system is already large and has run for some time (when it becomes 'legacy', but the amount of time and money invested in it is so huge that replacing it would cost more than altering the current system). Every system developed runs the risk of becoming a legacy system that has to be maintained for several years. This takes a lot of time, because the communication between the developer(s) who maintain/write the stored procedures and the developer(s) who write the DAL/BL code has to be intense: a new stored procedure will be saved fine, but it will not be called correctly until the DAL code is altered. When you have dynamic SQL in your BL at your hands, this is not a problem: you change the code there, create a different filter, whatever you like and whatever fits the functionality to implement. A good O/R mapper helps you with this. Microsoft also believes stored procedures are over: Microsoft's next-generation business framework MBF is based on ObjectSpaces, which generates SQL on the fly.

I too find writing code like string s = "SELECT * FROM Foo WHERE Bar = " + barValue; not the right thing to do. However, the alternative is not stored procedures; it's a component that generates this SQL on the fly, so you avoid the disadvantages of stored procedures and keep the advantage of generating exactly the SQL you need. Writing such a component is a one-time effort: you can reuse it each time you access a database. Stored procedures are not the answer, Dynamic SQL is.
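The core of that one-time component can be surprisingly small. Here is a minimal sketch, assuming simple equality filters; all names are illustrative, and a real engine would add type information, identity handling, caching and so on.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class SqlGenerator
{
    // Builds a tailored, parametrized UPDATE: only the fields the caller
    // marked as changed go into the SET clause, and the WHERE predicate is
    // assembled from whatever filter is supplied, so untouched blob columns
    // and COALESCE tricks are never needed.
    public static SqlCommand CreateUpdate(
        string table,
        IDictionary<string, object> changedFields,
        IDictionary<string, object> filter)
    {
        SqlCommand cmd = new SqlCommand();
        List<string> setParts = new List<string>();
        foreach (KeyValuePair<string, object> f in changedFields)
        {
            setParts.Add("[" + f.Key + "] = @s_" + f.Key);
            cmd.Parameters.AddWithValue("@s_" + f.Key, f.Value);
        }
        List<string> whereParts = new List<string>();
        foreach (KeyValuePair<string, object> f in filter)
        {
            whereParts.Add("[" + f.Key + "] = @w_" + f.Key);
            cmd.Parameters.AddWithValue("@w_" + f.Key, f.Value);
        }
        cmd.CommandText = "UPDATE [" + table + "] SET " +
            string.Join(", ", setParts.ToArray()) +
            " WHERE " + string.Join(" AND ", whereParts.ToArray());
        return cmd;
    }
}
```

Calling it with a single changed varchar field produces something like UPDATE [Customer] SET [Phone] = @s_Phone WHERE [CustomerId] = @w_CustomerId; the data travels as parameters, and because the statement text is stable per field combination, the server can reuse its execution plan.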

100 Comments

  • Duane: can you elaborate on that a bit? Just stating it's silly is great, but it's not a solid argument.



    That said, I write a stored procedure sometimes, the last one I wrote was a sequence emulator stored procedure. However the occasions I have to are countable on one hand, while I do database development every day.

  • All I can say is that I respectfully disagree 150%! Stored Procedures do provide a dramatic performance boost over dynamic SQL when needed. A generic DB access component is a disaster waiting to happen, as it depends on a properly designed database, which no one will ever create.



    Wally

  • Taking a binary approach is silly; you can't seriously be saying that dynamic SQL is better on every occasion. A few instances where I disagree: Indexed Views, which are essentially SPs, are invaluable in a scalable environment (an indexed view represents an essentially 'static' view on data, updated when the underlying data is). Using table variables in SQL Server 2000 from stored procedures lets me composite and filter my resultset before returning it to the client app, thereby substantially reducing my network traffic, connection usage and memory footprint for the app, and therefore vastly increasing my potential scalability. Your argument about using views is pretty limited in my opinion. Views are not updatable; I can't get a resultset from one SP and, based on that, update the data in my view and then filter the data in my view to produce the desired resultset. Even if I could, I would be storing a HUGE resultset for many operations where I only require a couple of items. Views cannot replace SPs in most circumstances.

    Quite simply, I also do serious web application development every single day; 99% of requests to the DB are best served by stored procedures. I have worked a lot with dynamic SQL in code previously (forced to when using MySQL with PHP / Java); the risk of introducing errors into really quite complex dynamically built SQL statements is pretty high, plus it's pretty tricky to debug every possible scenario under which the statement may be built.

  • Wallym: the dramatic performance boost is not there, because the stored procs are treated like any other queries which arrive at the SQL engine: they are compiled at runtime.



    So they're not faster; they are sometimes even slower, because you have to code constructs in the stored proc to add some sort of flexibility (like a bit parameter used in an IF ELSE clause; that will force a recompile every time!).



    Don't believe the myth, read the docs (Books Online). They're clear on this.

  • Indexed views can be accessed as normal views, by normal SQL. Views are something different from stored procedures, Scott. Also, your argument about connection usage is odd: connection pools will serve you well.



    True, it requires some work to get a dynamic query generator right, but once you have one, you don't have to recode it again.



    "I can't get a resultset from one SP and based on that, update the data in my view then filter the data in my View to produce the desired resultset - even if I could, I would be storing a HUGE resultset for many operations where I only require a couple of items - Views cannot replace SPs in most circumstances. "

    Views are used to control the security issues. SP's are code statements; views can be seen as static tables, usable in SELECT statements, FROM clauses and WHERE predicates. You can, for example, update rows in a table using the FROM clause to filter on a view's data. Views aren't limited; they are as flexible as tables, because they are a view on tables, replacing those tables in logic which would otherwise be performed on the tables themselves. Views don't replace update stored procedures; they can be used to control security where SP's are used to do that.

  • Sprocs' performance increase isn't just due to the precompilation. It is also due to the reduction of round tripping to the client (which isn't cheap), and the reduction of network traffic that comes with it (ahh, good times).



    As for the security mechanisms, I do a lot of training with healthcare and government bodies, and regulations require them to not give developers access to the underlying tables. Period. I guess you could use views for everything, but you'll have to be writing after triggers to get that to work and at that point you might as well be writing stored procedures anyways.



    And most people are doing security for more than two roles. If you are doing security on a column basis for even just five roles, may god have mercy on your soul. You will screw up, you will waste time and you will end up opening more data to people that don't need it. Just like security based on roles is easier to implement, security based on tasks is also easier to maintain and implement.



    Of course, that's just my opinion, I could be wrong. It's interesting to see everyone else's opinion on the subject though.

  • Phil...the round trip argument is pretty much what I was trying to get to with the connection stuff...I also completely agree with your other arguments :-)

  • Excellent post. We had a major project at my last employer that had a DBA writing (or at least approving) the stored procs for rather routine CRUD, a junior developer writing wrapper classes around all of those stored procs, and then the application developers only had to worry about consuming those data libraries. Looked great on paper, since this was an n-tier architecture with lots of separation, and we all know that's supposed to make our lives and maintenance easier. The problem was that it did not make anything easier, since nearly every change in the database required at least 2 people, if not 3. These changes also had to be coordinated very well, since anyone done out of sync, or too late, would break the build, and this did indeed happen. I think our DBA felt very threatened when I talked about O/R mappers, but I honestly think these tools would actually finally have given him the freedom to worry about the database design and performance optimization, instead of writing routine CRUD procs. The junior developer would also be freed up to actually become a real application developer, instead of just being another wasted body. So, down with stored procedures -- long live SQL!

  • @phil: 'It is also due to the reduction of round tripping...'



    This is not the case, you can use a whole batch of T-SQL commands in a single call resulting in a single network roundtrip, so it is the same number of round trips as if you would call a SPROC: 1!
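Marc's point can be sketched in a few lines. Assuming an open SqlConnection and an invented Customer table, both statements below travel to the server in one network round trip, exactly like a single sproc call would.

```csharp
using System.Data.SqlClient;

public static class BatchSketch
{
    public static void TouchAndRead(SqlConnection conn, int customerId)
    {
        // Two statements, one CommandText, one round trip.
        SqlCommand batch = new SqlCommand(
            "UPDATE Customer SET Visits = Visits + 1 WHERE CustomerId = @id; " +
            "SELECT Name, Visits FROM Customer WHERE CustomerId = @id;", conn);
        batch.Parameters.AddWithValue("@id", customerId);
        using (SqlDataReader reader = batch.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume the SELECT's rows here
            }
        }
    }
}
```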

  • Marc: Good call on saving the round tripping with multi-statement SQL.



    But I would hate to run into this type of code personally. I mean, you are basically writing a stored procedure at that point and hard coding it into your app. Most of my development is in VB, but I would have to switch to C# just to be able to get away with not having to use the dreaded _ and & to build the string together to get that sucker to run, and be maintainable. Once again, my personal preference.



    Now as Paul brings up, your opinion of stored procedures really depends on your environment too. I personally would simply set fire to the building with the three-person tag team on the CRUD development.



    It might just be the relatively small nature of the projects I've been working on (15-30 tables), but I've managed to escape writing simple CRUD sprocs. Most of my sprocs look something like "AddOrder", "GetOrdersByCustomerID" and things like that. I have been given the power of writing my own sprocs; if I had to go to a DBA to get that done, well, if you've seen Kill Bill you know the type of violence that would ensue.



    The only time I seem to not be using stored procedures is when I'm working with data adapters and datasets. I see no real reason to use stored procedures for the reasons outlined here by everyone else. But in my connected environment land, I'd rather use stored procedures instead of using the work arounds that people have presented here.

  • Paul: exactly my point.



    Phil: giving developers access to stored procs is not helping you a lot. You have to insert/update data anyway, there ARE sp's in the API which do that. So the argument is pretty moot if you ask me, but nevertheless, if you are REQUIRED to do so (even when the arguments for those requirements are not that great) you don't have a choice. If you DO have a choice: think twice.

  • @Phil Scott:

    ::But I would hate to run into this type of

    ::code personally. I mean, you are basically

    ::writing a stored procedure at that point and

    ::hard coding it into your app



    Totally wrong.



    If you do it right, you write a SQL generator that is able to take advantage of batching, as I do right now with the EntityBroker. Nothing else.



    My programs will use batching, but I will never "code a SP into my app". Never.

  • @Phil: Is it really that different? You are going to 'hardcode' it anyway: you build your SQL either with Query Analyzer or another tool. When you are done, you type your SQL code into an editor (=hardcode it) and run the program to get the results. Now, in one case your editor may be the SQL Enterprise Manager, creating a SPROC. In the other case this may be VS.NET, and you enter the T-SQL in a VB.NET or C# script. In either case, you effectively 'hardcoded' it!



    And I agree the @ really helps :)

  • Marc:



    How is:

    myCustomer.Save();



    or

    DaoAdapter dao = new DaoAdapter();

    dao.Save(myCustomer);



    hardcoding sql?

    It's not. You require logic to distinguish insert from update, when update, what to update, etc. Nothing is hardcoded. That's the point.



    Filtering on entities in the database is dynamic: you formulate your filter right where you need it. Say a filter requires 2 arguments; if you want that in a stored procedure, you would add 2 parameters. Another query in your app does the same filter, but with 3 arguments. You have to alter the stored proc or create a new one. Adding a new parameter will break the 2-parameter call, so you have to alter THAT code as well. Which is weird, because with the dynamic query you wouldn't have to alter the existing query at all.
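The 2-versus-3 argument situation can be sketched with a hypothetical filter list: the same code path handles any number of terms, where a stored procedure would need a new signature per argument count. Table and column names are invented.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class FilterSketch
{
    public static SqlCommand CreateSelect(List<KeyValuePair<string, object>> filter)
    {
        SqlCommand cmd = new SqlCommand();
        List<string> parts = new List<string>();
        for (int i = 0; i < filter.Count; i++)
        {
            parts.Add("[" + filter[i].Key + "] = @p" + i);
            cmd.Parameters.AddWithValue("@p" + i, filter[i].Value);
        }
        // Two terms today, three tomorrow: only the caller's list changes.
        // No stored procedure signature to alter, no other callers to break.
        cmd.CommandText = "SELECT CustomerId, Name FROM Customer WHERE " +
            string.Join(" AND ", parts.ToArray());
        return cmd;
    }
}
```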

  • Right-on! I've been on the stored proc bandwagon long enough.



    Eliminating stored procs is not silly. I'll take a nice and tight database layer over procs any day.

  • @Frans:



    I was just replying to Phil as he said he feels that using SQL inside code is 'hardcoding' it (if I understood him correctly). So I was just comparing SPROCs to 'hardcoded' SQL inside a SqlCommand, for example :)



    Now, if you enter the world of dynamic SQL for CRUD operations you are absolutely right. If you don't use such an ORM tool, you _will_ have to hardcode your CRUD SQL somewhere, either in a SPROC or inside a call to, say, SqlCommand()!!

  • @Mike:



    >Simply wrong. There are things a Stored Procedure can do that would take many round trips for Dynamic SQL.



    Care to share an example?



    >If you think sending a huge multi-statement query request across the wire is more efficient that a simple SP call...



    Nobody says that. The point is that a multi-statement query will be _as fast_ as a simple SP call. Not faster, not slower :)



    If you work across a 56k modem to your DB, you are indeed working on a different set of problems. In this case, I agree with you that even a few hundred extra bytes of T-SQL code may add up.

  • Marc: hardcoding crud statements is not something you have to do. Using a dataset for example, you can create your insert/update statements on the fly. The data you have to use in an insert/update statement lives somewhere, you can use that to create the statement. Using a generic block of code you can reuse.

  • So, when is your Database Object released? :P

  • Marc, you understood me correctly. Just poor wording on my part. At 10am my brain has flipped its "write coherent" bit on.



    Of course it isn't that different; it is just a matter of preference for me. I'd much rather have that code as a stored procedure on the server than on the client, that's all. And as Mike said, sending a 2000-byte SQL statement vs a 20-byte stored procedure name might add up depending on the type of connection, the architecture and the client's needs.



    O/R tools, and dynamic sql generators are always great tools to have, and if I have access to an easy to use and powerful one I'd take that any day of the week over writing sprocs by hand. But not everyone has the resources or even the need to have such a tool on hand.



    I can't see how anyone in this case has been totally wrong or how one way of doing things would be the "right way" over another. Each method is going to have its pros and cons. I think if you are using an O/R tool or the DataAdapters you aren't going to be seeing any benefits from using stored procedures. But for those who are (god forbid) using DataReaders and sending SqlCommands directly, I feel I'd rather use sprocs for that.

  • @Frans: Using CommandBuilder?? - This is definitely _not_ a good idea. It may work for simple tables, but as soon as you start to have more complex scenarios, it stops working. 'Complex' scenarios start with using JOINs and multiple PKs etc.

  • At my company, we have 3 software products that work against SQL Server, Oracle, and DB2 with the flip of a switch. We have built it using dynamic SQL within our code and all 3 have been around for years. These are big apps and, in the end, they are very maintainable.



    Imagine trying to sell your product to a company's IT group and saying that it relies on stored procedures. Guess what...the DBA is going to shoot it down because he knows that as new releases come out, it is going to be a huge pain in the ass for them to update and maintain changes to the SP's. Now tell them that they don't have to do anything. All the SQL is managed in the code itself. Well, you know their reaction.



    Maybe an in-house application where I know the back-end will always be SQL Server or Oracle or whatever, I would use SP's and love them. That's just not the case for a product-based application. At least in my experience it isn't.

  • Marc: no not using commandbuilder:) Using a component you write yourself (or buy). You feed it the objects it should generate DML for and it generates that DML, including parameters. Execute that and you're set. Cache the result on the client if you will so you don't have to regenerate the statement. A dataset has the schema inside itself needed to generate the code you need. (I use the 'dataset' example, since Scott accused me of writing a blog to sell my O/R mapper LLBLGen Pro which is not the case at all :) ).

  • As far as Paul's comment about breaking client code by making changes to the stored procedures: this is no different from changing the rows a view returns, and as for parameter changes, the best way to handle this is to add a parameter with a default value that works everywhere the SP is called except the place that needs the new parameter. A good DBA does not break client code. A bad DBA will mess up any architecture.

  • @Frans



    Yepp, like I said, I was not trying to argue against the advantages of using something like ObjectSpaces ;) - But if you don't have that at hand, you are likely to hardcode your SQL code either into a SPROC or inside C# (or whatever) code. JW makes a good point though, that such a tool should be able to support both auto-generated SQL code as well as existing SPROCs as it simply is a requirement for some companies. Still, it doesn't change the general argument that we have here about the technical pros/cons of SPROCs. The business side is still something else to consider on top. On the other side, if you develop a product and deploy your own DB, I definitely want to make sure that the DBA _has no chance in hell_ to 'tweak'/'enhance'/ruin our SQL code ;)

  • I think you need to address the comments about round trips. The biggest aspect of stored procs is that they live close to the data. So, if you are doing simple processing - sure. However, if you are doing complex processing requiring multiphase transactions - interim result sets (changing logic and flow based on interim result sets) - I don't see why stored procs wouldn't be used. When you need to pass everything out of SQL, you are working on a layer (the network layer) that is literally thousands of times slower than memory. This isn't good. I don't know about the apps that you write, but the apps that I write tend to have very complex rule implementations, and data manipulation isn't confined to simple SQL statements. They may require many nested transactions, sub-queries, multi-stage processing, etc.



    I do agree, however, that SPs are misused for the wrong reasons with regard to performance, security, etc. But in the real world (architecturally) they are a fact of life. If they didn't represent some level of goodness, why has Oracle been promoting Java in the DB for years? Why are we so excited about having managed code live in Yukon? It's all about round trips and reducing network traffic as much as we can. Chatty apps are bad apps.

  • Some stored procedures are pretty complex and big in size. Having them as SQL queries resent by hundreds of clients is a network bottleneck. They also eliminate round trips: you can encapsulate all your SQL in one call.



    Some SQL statements span different databases across different servers (linked servers). This means these clients need to have access to these servers.



    How come your first LLBLGen used SPs only?



    Abdu

  • "How come your first LLBLGen used SPs only?"

    I wrote that tool at a time when I was fully convinced stored procedures were much better and way faster than dynamic SQL. After I hit the wall and learned that this is not the case, I gave up on stored procedures. That's why LLBLGen Pro generates all SQL on the fly. It can still call SP's if you want to, but not for entity usage.

  • @Joel



    I don't think anyone is asking to drop SPROCs off the feature list. I totally agree that there are good reasons/cases when you should use SPROCs. The general argument started with Rob saying that you should use SPROCs always and everywhere. That's where many people, including myself, disagree. But that doesn't mean that SPROCs are useless, quite the opposite.



    Yukon solves many problems I/we have with SPROCs, for example versioning and multi-developer scenarios, let alone the whole deployment stuff. Still, I think that tools like ObjectSpaces, EntityBroker or the beloved LLBLGen have their merits even in the Yukon timeframe. Using parameterized queries still will keep the action where it should be (on the DB server using SQL) and the few extra bytes on the line surely don't make a real difference for most apps, at least IMHO.



  • @Rob



    From your original post:

    >At just about every talk I give I always try to make several consistent statements. One of which is: ‘Whenever possible use stored procedures to access your data’.



    I think this statement is much too general in nature, so I don't feel I misquoted you by saying 'with Rob saying that you should use SPROCs always and everywhere'.

  • ::>At just about every talk I give I always

    ::try to make several consistent statements.

    ::One of which is: ‘Whenever possible use

    ::stored procedures to access your data’.

    ::

    ::I think this statement is much to general in

    ::nature, so I don't feel I misquoted you

    ::saying 'with Rob saying that you should use

    ::SPROCs always and everywhere'.



    I would say the original statement is criminally wrong.



    See, "Whenever possible" means "also when it makes no sense". Even when you end up with a simple "select * from table where id = ?" in the SP, this would mean "use a SP, just because it is possible".



    Basically, the statement means: use sp's for everything unless you use MS Access or another database which does not know SP's.



    The quote from Frans (sorry, Rob) was right.

  • Use a stored procedure if the roundtrip savings are worth the decrease in maintainability, flexibility and compatibility. We tend to call this "the zone": "the zone" where business logic is factored tighter and tighter to your DB schema. Besides, the people who wrote O/R mapping tools probably compared execution plans (SQL Server) and must have noticed dynamic SQL is way faster, given SQL Server's crappy stored proc optimisation when precompiling.

  • I think that after eight years of keeping stored procedures you become tired of them, and you'd rather have your comfort than a well-designed application.



    Rob's got a point there.

  • I'm surprised that no one has mentioned the merits of using SProcs in a secure environment yet. For those DBs where complete access security defines the method of data access, SProcs are really the only way to go.



    Say for example I have a DB using Windows Auth to restrict who can view what data. The DB is being accessed by three or four different apps. Now on that DB I have something like 100 tables, which I do NOT want the casual browser (using Query Analyzer) to be able to view. I therefore need to lock down access to all tables. Access MUST then be made through SProcs. I know - I could just allow access to the tables through defined logins. But what if I need two different roles (Staff vs. Managers - defined by NT Domain groups) to be able to retrieve different sets of data from the same tables? And what if the access levels that particular users have are to be changed? For example - a Manager can only view payslips for those employees that work directly for him instead of the whole department? That would require the code of three or four applications to be changed. I'd rather change one set of SProcs...



    The point I'm trying to make is that if you are developing an application in which any kind of sensitive/personal data is involved, or where role-based access to data is necessary, or you are required to ensure that this data is kept secure - dynamic SQL will not be your best friend. In these instances only SProcs will help. In fact I'd go so far as to say that if you are designing/developing with security in mind from the start, then you'd be daft not to use the built-in security that SProcs offer you.

  • Mike: the time it takes to create all those stored procedures for each role can also be used to create the views you want. The advantage of that is that you can use views in queries (FROM clauses); you can't do that with sprocs.



    People think that in a high-security environment, stored procedures will be much more secure AND take way less effort. This is not true: for each byte you want to return as a select result, you have to write a procedure. That same effort can also be used to write a view.
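The point about views composing where procedures cannot is easy to demonstrate. The thread is about SQL Server, but the mechanics are the same in any engine; the sketch below uses Python's sqlite3 with invented table and column names purely for illustration. A restricted view hides the sensitive column, and unlike a stored procedure's result set, the view can then be used in a FROM clause of any further query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                           dept TEXT, salary INTEGER);
    INSERT INTO employee VALUES
        (1, 'Ann', 'Sales', 50000),
        (2, 'Bob', 'Sales', 60000),
        (3, 'Cee', 'IT',    70000);

    -- The restricted view: salary is simply not exposed.
    CREATE VIEW employee_public AS
        SELECT id, name, dept FROM employee;
""")

# Unlike a proc's result set, the view composes in a FROM clause:
rows = conn.execute("""
    SELECT dept, COUNT(*) FROM employee_public
    GROUP BY dept ORDER BY dept
""").fetchall()
print(rows)  # [('IT', 1), ('Sales', 2)]
```

In SQL Server you would pair this with GRANT SELECT on the view and no permissions on the base table, which gives the per-role lockdown Mike describes without writing one procedure per result shape.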

  • What about writing your business logic in a stored procedure?

  • What about user defined functions which return tables (SQL 2000) (views with parameters)?

    Should we use them, Frans,

    or should we buy your LLBLGen Pro?



  • Depends on what you want to do with the user defined functions. afaik, you can't call them from outside the db.

  • ::What about writing your business logic in a

    ::stored procedure?



    Perfectly valid approach.



    Just don't tell people then that you follow object-oriented programming - you simply don't.

  • Chess Player: I meant calling them directly as a stored proc using a Command object; afaik, you can't do that, it always requires a T-SQL statement which references (calls) the function. Of course you can call them embedded in an SQL statement, like the article shows. (The article is not that correct though; it says stored procedures are compiled and views aren't. Well, the code of a view is placed at the spot where the view is referenced. This means that a stored proc using a view contains in its execution plan the compiled code of the view. At least that's what BOL tells me.)

  • >Perfectly valid approach.



    >Just dont tell people then that you follow object oriented programming - you are simply not.



    Sometimes, you have to make trade-offs between perfectly valid OO and performance.

  • ::Sometimes, you have to make trade-offs

    ::between perfectly valid OO and performance.



    Yes, but these are fewer than you think.



    Because the one thing totally ignored so far is that a good OO DAL would introduce a nice little caching opportunity, too :-)



    AND: the performance tradeoff is only a valid argument when necessary. Otherwise you just blow out the maintainability of the app for a performance gain you don't use. BAD decision.



    But sometimes, for certain simple procedures, it really is good to put them into an SP.

  • :Frans Bouma

    :Posted @ 11/18/2003 2:20 PM

    :Wallym: the dramatic performance boost is not

    :there, because the stored procs are threated

    :as queries which arrive at the sql engine:

    :they are compiled at runtime.



    Frans, I respectfully disagree. My experience has been that sprocs do provide a dramatic performance increase. The issue is when is this performance boost needed vs. ease of programming.



    Thomas Tomiczek, you sound like a ranting child. I stand by my statements and the standards that I use.



    Wally

  • "Sometimes, you have to make trade-offs between perfectly valid OO and performance."

    Placing BL in the stored procs has nothing to do with OO or not; it has everything to do with separation of functional tiers. Moving BL into the procs is colliding everything together. This was done in the old days too, where queries were executed in a button handler. It worked faster, but was unmaintainable.

  • "My experience has been that sprocs do provide a dramatic performance increase. The issue is when is this performance boost needed vs. ease of programming. "

    While I most of the time agree with your opinions, I have to say this opinion of yours is just what it is: your opinion, not based on fact, just on personal experience. When I look at the theory behind the SQL compiler in SqlServer, I can't conclude anything other than that a dyn. parametrized query and a stored proc run equally fast, because both will use a cached execution plan. (Yes, the dyn. query has to be compiled first, but so does the proc, which also will have to be recompiled if you're not careful; and the more code you add to make procs flexible, the more risk you take of causing a recompile of the stored proc on every occasion.)

  • ::Thomas Tomiczek, you sound like a ranting

    ::child. I stand by my statements and the

    ::standards that I use.



    This is ok. It does not make your standards any more meaningful, though. And this is what you should care about more, you know. Especially given the number of arguments you give in here.



    @Frans:

    ::yes, dyn. query has to be compiled first, so

    ::does the proc, which also will have to be

    ::recompiled if you're not careful (and the

    ::more code you add to make them flexible the

    ::more risk you take to cause a recompile of a

    ::stored proc on every occasion



    Even worse. You end up with a "with recompile" option on the SP pretty fast. Because all these neat tricks for handling variable numbers of parameters end up killing your performance. The cached plan will be the one selected for the first set of parameters discovered. All other requests going in will reuse the compiled execution plan, whether this makes sense or not. This can only be avoided when you don't work with "dynamic numbers of parameters".



    What I mean are sp's like this:



    getUserData(filterName, filterCountry)



    where both could be null. Depending on the selection you end up with zero, one or two where conditions.



    In a SP you can:

    * Make a series of IF statements and HAVE three statements. Efficient, but ugly as hell.



    * Make one statement that uses conditions. Way less code and pretty nice - but basically the cached execution plan is determined by the first set of parameters.



    Result: the WITH RECOMPILE option, and there goes your "wow, I did reuse plans", because you basically had to kill your performance gains.



    Alternatives: three different SPs - not better than the variant one, and ugly as hell.



    With dynamic SQL the SQL generator can handle this on the client side.



    The main problem with SPs here is that I know a lot of occasions where this dynamic thing turns around and bites you faster than you can say "I was an idiot programming this", and this is basically whenever you show a powerful query form to the user. One where the user may be able to select multiple conditions. One like, btw, the "Advanced Find" in Outlook (on the Advanced tab, btw). I would basically say it is nearly impossible to code something like this in an efficient set of SPs on the database - the most efficient way to do this IS dynamic SQL, as the query is dynamic.



    So, as a summary: SPs bite you when you go dynamic in the application. Now, it is up to you to decide whether to write applications that empower the user enough that you realize you just got killed by SPs. Naturally we all do this, right? We all make configurable software, using command patterns etc. Right?
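The getUserData example above is easy to see in code. Below is a minimal sketch of a client-side generator, in Python with sqlite3 standing in for SQL Server (table and column names invented for illustration): only the filters actually supplied end up in the WHERE clause, so each filter combination is its own parameterized statement, and the server can cache a plan per shape instead of one plan serving all shapes:

```python
import sqlite3

def get_user_data_sql(filter_name=None, filter_country=None):
    """Build ONE parameterized query; only the filters that are
    actually supplied end up in the WHERE clause."""
    sql = "SELECT id, name, country FROM users"
    conditions, params = [], []
    if filter_name is not None:
        conditions.append("name = ?")
        params.append(filter_name)
    if filter_country is not None:
        conditions.append("country = ?")
        params.append(filter_country)
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, country TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "ann", "NL"), (2, "bob", "US"), (3, "cee", "NL")])

sql, params = get_user_data_sql(filter_country="NL")
rows = conn.execute(sql, params).fetchall()
print(sql)   # SELECT id, name, country FROM users WHERE country = ?
print(rows)  # the two NL users
```

This is exactly the variable-conditions case that forces WITH RECOMPILE or three separate procs on the server side; generated on the client, each shape is just another parameterized query.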

  • >>yes, dyn. query has to be compiled first, so does the proc



    What's the use of the 'WITH RECOMPILE' clause on 'CREATE PROCEDURE' in SQL Server then?

    I also thought that you could recompile an SP in SQL Server explicitly, so why would the SP be recompiled every time it's called?



    /me needs to do some investigation on that subject.

  • Frederik: WITH RECOMPILE means the proc will be recompiled no matter what on each access: (From BOL) "Creating a stored procedure that specifies the WITH RECOMPILE option in its definition indicates that SQL Server does not cache a plan for this stored procedure; the stored procedure is recompiled each time it is executed"



    It can be better to signal SQL Server not to bother caching the plan than to have it look up the cached plan only to find out it has to recompile the procedure anyway. There are situations where a stored procedure will be recompiled anyway, because of different code blocks being executed, temp table usage in IF statements, etc.

  • What's so interesting about benchmarking something over ODBC? What's reality for ODBC doesn't have to be reality for native ADO.NET providers. Example: ADO over ODBC doesn't call sp_executeSQL per se when you execute a dynamic statement; SqlServer's client for ADO.NET does.

  • That quote is based on the assumption that Oracle supports as many SQL-92 standard statements as SqlServer does (and vice versa). Both have proprietary statements to do stuff (8i can't join ANSI style, 8i/9i don't have FROM clauses in delete/update etc. (delete has 1 from, not 2)...). This obviously will make an app fail.



    However I generate SQL on the fly using components that are specialized to do so per database. Thus one for SqlServer, one for Oracle, etc. The code targeting these objects is the SAME (which is the code a developer would write); the SQL generated at runtime differs, because the engine used to generate it is different for Oracle than for SqlServer. Using the same engine is indeed not handy. :)

  • Sorry but I agree with him.

    One more quote:

    "These may seem like surprisingly obvious statements, but in my experience, I have found that too many people approach the database as if it were a 'black box' — something that they don't need to know about. Maybe they have a SQL generator that will save them from the hardship of having to learn SQL. Maybe they figure they will just use it like a flat file and do 'keyed reads'. Whatever they figure, I can tell you that thinking along these lines is most certainly misguided; you simply cannot get away with not understanding the database. This chapter will discuss why you need to know about the database, specifically why you need to understand:"



    Remember EJB and all that object mapping stuff

    (TopLink)

    for Java and databases.

    Good Luck!!

    But I don't believe in that:

    "No. With an O/R mapper this is totally abstracted away. The only thing that is very handy to have is a relation graph, thus which entities are related to which entities. But you can print that using Enterprise Manager."



  • You don't believe me? That's fine. I don't see the point in learning SQL when you are not using it at all. The definitions (very important) of what people want are the same for every database; the SQL generated is different.



    It's like claiming knowledge of MSIL is a must-have to be able to write C# successfully, or that you must understand how NTFS works to be able to use the I/O subsystem via System.IO.

  • Don't be angry ;).

    Let's say it this way: the set of SQL generated by your tool is probably a subset of what I have already generated manually in my stored procedures or SQL statements with parameters.

    I bet your tool would not solve some practical problems that I solved with SQL.

    If a developer uses your tool and does not know SQL, in my opinion he (or she) will sooner or later have an unsolvable problem that requires SQL.

    In my opinion everything you can do in IL you can do in C#, which is not true for O/R mapping tools.

    C# = IL

    SQL > LLBLGen Pro







  • Well, for DML, not really. For special selects, I agree: you can never mimic RDBMS-native functions well enough, however dyn. SQL engines can get very close. It depends on what you need, and besides aggregates, you don't need database-specific SQL statements in a *LOT* of situations. For the rare situation where you do, you can fall back on creating a view or indeed a stored proc, but these situations are rare.

  • Being quite fond of stored procedures myself, it's been a real treat to read this blog-entry and the following discussion to get some input.



    I don't believe there is only one correct way to do things, nothing is just black or white. Different problems require different solutions, and only the specific scenario decides which is better: dynamic sql or stored procedures or perhaps a mix.



    That said, I'll provide you (all) with a little more input to shoot at :-)



    Firstly, why hasn't anybody produced comparative benchmarks? How can you argue that dynamic sql should be just as fast as stored procedures with no data to back it up - or the other way around for that matter. Books online is a great source for information, but is it absolute truth? (I won't either, btw)



    Regarding the use of sprocs: it's true that if you don't write your stored procedures correctly, you may as well use dynamic SQL statements. But I think this is because of a common misconception. Procedures are not a place to put business logic, and as such there should be no need for conditionals, behavioural bits and the like, and therefore no need for recompilation. If you can't manage without them, your design must be wrong somehow. (Yes, I'm sure there are tons of counter-examples, but in general...)



    As for O/R-mappers, my experience is that they're not very good at handling complex relationships, and if you have to write the code that generates the SQL yourself, you may as well put the statements as procedures in the database and write a wrapper (or use someone else's - i.e. Microsoft.ApplicationBlocks.Data for .NET). Oh, and you can use dynamic discovery of SP parameters the same way you dynamically discover changes to a table.



    In my opinion, the best argument against stored procedures is the "maintainability" issue. 4+ SPs per table/object and corresponding wrappers can get hairy to maintain and are a huge source of errors. However, the strength of the approach is that you can test the persistence part of the application systematically - this is a little tricky with generated code. If you find an error, replacement is a matter of rewriting (and testing!) a few lines of code. I guess testability and maintainability have some sort of trade-off relationship that needs to be taken into account when planning the project.



    There, my 2 cents

    Mikkel

  • sprocs are the way to go.



    1. It is a packaging issue - I do not want my SQL code to be buried in all C# source files, all over.



    2. I can test them and sql optimize them as batches.



    3. Read up on the 'compilation' stuff - you have it wrong.



    Plug - My tool (www.dbcodegen.com) generates C# bindings for sprocs as well.



    Ricky

  • "3. Read up on the 'compilation' stuff - you have it wrong."

    Which part of "Quote" do you not understand? I quoted the part from the BOL. If you think that text is wrong, BOL is wrong, not me.



    "As for performance, sprocs are faster. I've done time trials. Have you? "

    Yes I have, see the links at the top. SProcs are slower in fact in my tests.



    However, 'in my tests, ABC is faster' is bogus to me when there is no explanation given. BOL tells me SProcs and dyn. queries are not treated differently; therefore one should be as fast as the other. You may think sprocs are faster, but if you can't give a theoretical explanation that is TRUE, don't bother claiming they're faster.



    Code generation for stored procs is ok, but then also maintenance sucks: you end up with a lot of procs and when the schema changes, which one to remove? All of them? None? Trust me, with LLBLGen 1.x, which generates procs and classes way before Eric Smith thought it was a good idea, I've learned this the hard way: a hell of a lot of procs, all nice, but unmaintainable.

  • Yet another religious war:( I'm sick of it!



    Don't you guys get it? I'm sure you won't until you read a book like "The design and evolution of C++". You can't just say "Sprocs SUCK, long live SQL" (a guy actually said that:). You can't also just say "SQL sucks, long live sprocs!". It's all about using the one or the other when it is most suitable. If Bjarne had the "C sucks, long live Simula" mindset, C++ would NEVER be the MOST successful language! You want lean and mean near-the-iron code -- use the C subset of C++. Should you want to build an enormous higher level solution FASTER than using plain "old":) C, use the OO features of C++. You want to update 10 tables with one network trip -- use sprocs. You want to build a generic O/R mapper because you're reimplementing it ALWAYS anyway -- use dynamic SQL. What the @#$% is the problem with you guys, huh?



    P.S. To calm you down a bit:

    I like sprocs, and I like dynamic SQL. I hate sprocs and I hate dynamic SQL. In fact, I hate doing any DB-related work so don't sue me:)



    P.S. to the O/R mappers guys here: Don't try to advertise your O/R mappers in such a stupid way:)



    Cheers,

    Stoyan

  • This one is for the sproc guys:



    Are you ALWAYS the one deciding which database will be used? Isn't that again your company and your client's company politics? If the customer chooses MySQL (or Access) exactly what kind of sproc will you write?



    This is for the O/R mapper guys:



    Excuse me guys, but your O/R mappers can serve a maximum of 70% of the line-of-business applications that need to be built. I know it's OK for your companies and you could live with that, but the rest of us are building applications that are orders of magnitude more complex than Northwind & pubs, and more often than not, a generic solution is not a solution at all.



    What happens if your O/R mapper should support Access, SQL Server and Oracle? I'd rather use/invent a pattern which allows me to write specific code for each DB (but to have the same code in the BL) than use a generic solution. For example, if I have to insert N rows in a single query, I'd use Oracle's arrays, SQL Server's XML support and a good old "for" loop for Access. What about that? What about if I have to cache data? What about if I have to perform virtual locks on the data (even with the disconnected ADO.NET)?



    Cheers,

    Stoyan

  • Wow, the more I read about this... the more I just want to use object serialization and keep my data with my object and forgo the db altogether... ;)



    @Stoyan: Agreed, use of SPROCs or DynSQL really depends on the situation at hand.

  • It's ironic that this flame on SP has been done using a blog engine that is powered by SP.

  • Frans,



    "way before Eric Smith thought it was a good idea" Wow! You're turning into a regular thona.



    What I do is run a script that drops all my standard CRUD procs and then re-generate them all when the schema changes. The same thing for my DAL classes. It's called active generation. I think you know a thing or two about this so I'm not sure why you would be slamming me or my tool over it. While CodeSmith does not have the full blown template set that LLBLGen Pro does, the two products are basically the same thing (except CodeSmith is free). Someone could just as easily generate their DAL using an XML O/R mapping file and use dynamic SQL or whatever they want. That's the beauty of a template based code generator. It doesn't dictate architecture to the end user. I don't know about you guys but I will never use a code generation tool that forces me to do things a certain way.

  • This started out being interesting, but once again Thona has appeared and ruined it. Seriously, the guy is the most violent drunk I've come across in this arena. I think I'll head elsewhere ...

  • Bit of a lack of thrill here. Rant, rave, and yell, but don't actually present any facts or figures. So we are all supposed to start creating a series of roles on the database, couple that with a series of views, and tack onto that a bullet-proof, hack-proof dynamic SQL method just to get past SPs? Not to actually get faster than SPs, or safer than SPs, or even less resource-intensive than SPs... just to please an SP hater? Yeah, right, and I just saw a flock of pigs flying north.

  • Oh yes..has anyone here heard of 'pre-compilation' of Stored Procedures?



    Also, Frans, if you quote from something to use it as support for your argument, then you cannot turn around and blame that source if it turns out to be wrong, with no backlash on yourself. It's called 'checking your sources'. You quote it, you wear it.



    And that guy who started up about 'Access' and its obvious lack of SPs... please, give us all a break. If you want to use Access then feel free... but don't pretend speed, scalability or anything else are considerations. Like driving a Ford Anglia into a SuperCar rally.

  • What's the problem, Ed? What did I quote but misuse?



    Also, tell me how to precompile stored procs in SqlServer so that it adds something to the execution speed. You can't do that; claiming you can is bollocks.



    If you want to keep using SPs, go ahead. However if you want to defend your decision WHY you want to keep doing that, use facts instead of stories about flying pigs :)

  • What the hell are you saying???

  • Being from the admin side of the planet, I've had to support applications that use stored procedures and applications that don't. "In my experience", the applications that don't perform nearly as well, and I have had to purchase more equipment to scale them versus applications that use stored procedures. Now that doesn't say applications that use dynamic SQL can't scale well. I just have to live the pain every day of throwing more hardware at enterprise applications that don't use stored procedures.

  • I use stored procedures exclusively. No one is allowed direct access to the tables. Here's why:



    1) Using inline (or 'dynamic') SQL, you assign permissions on a given table for selects, inserts, updates and deletes. When you give select permission to a user/group/role, you allow them to execute whatever they want against the table (or tables). By using sprocs, you keep a newbie from opening Access (or Excel for that matter) and running a cross join that takes down your entire production server.

    2) SQL injection - if you run inline SQL and aren't scared of this, well, you should be using Access anyway.

    3) Complexity - in a big data shop, a two-table join just ain't gonna cut it. Try setting isolation levels, nested transactions with savepoints, and doing server-side performance traces.



    Dynamic SQL has its place. Usually it's in smaller projects that don't have security, performance or ACID concerns.
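For the record, the injection risk raised in point 2 applies to concatenated SQL, not to the parameterized dynamic SQL discussed earlier in the thread. A minimal sketch (Python's sqlite3 with an invented table; the ADO.NET parameter mechanics are the same in spirit) shows the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 200)])

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Unsafe: string concatenation lets the input rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM accounts WHERE name = '" + user_input + "'").fetchall()

# Safe: a parameterized query treats the whole input as one literal value.
safe = conn.execute(
    "SELECT * FROM accounts WHERE name = ?", (user_input,)).fetchall()

print(len(unsafe))  # 2 -> injection succeeded, every row came back
print(len(safe))    # 0 -> no account is literally named that string
```

So "dynamic SQL" in the sense of generated, parameterized statements is no more injectable than a sproc; only hand-concatenated strings are.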

  • Effective system design is never one-size-fits-all. To be an effective application architect you need a full set of tools in your tool-belt. Otherwise, if all you have is a hammer, everything looks like a nail.



    Never say never. Use the appropriate technology for the job. Appropriate has a lot of meaning here, standards and common techniques are important in a large organization.



    Small one-off business needs can often be met quite cost-effectively with tools like MS-ACCESS. (Even though I don't want to be an ACCESS programmer, my personal choice is rarely in alignment with the goals of the business)



    The one thing that really gets me the most, is that when some folks talk about 3-tier design, they really only include the middle tier and the client interface, and simplify the database server to the role of 'persistence'.



    If you aren't maximizing the capabilities of all 3 tiers how can you call this 3-tier design? (Isn't it really 2-tier design that comes along with a database...for reasons that aren't very clear to some Java guys who prefer flat files!)











  • Well you argue your point very well. So long as your goal was to arbitrarily manipulate facts and data to simply propagate your own personal ideal. The problem with this train of thought is that you are trying to place a global solution blanket over all problems, and this automatically diminishes your effectiveness in the IT industry. You see, the simple truth of the matter is, dynamic SQL, in-line SQL, and stored procedures all have their place in today's application world. I have spent the better part of the last 6 years working on large-scale web applications with million+ user bases, and I can tell you from research and real-world observation and study that stored procedures are not bad. In two specific applications, we received a 15% performance improvement just by changing from dynamic SQL to stored procedures. Despite what you say, maintenance nightmares exist for either method; you simply chose to ignore those of dynamic SQL and focus on those of stored procedures. Dynamic SQL simply is not as fast, is not as secure, and has just as many (if not more) maintenance issues as stored procedures. But... some applications don't require that, and based upon the needs of a particular application, dynamic SQL may indeed be the better option. Be careful throwing around blanket statements, especially without real-world examples and observation from BOTH sides of the street. Your clients will appreciate it as well.

  • If you do not use stored procedures in my company, you will be fired immediately. I do not want sloppy code; I want professional code.



    Be professional or be unemployed.

  • I have always developed believing that both stored procedures and D-SQL have their place, but recently I decided to see what the opinions of the masses were. Asking some DBA peers of mine, their opinion was to use SPs over D-SQL in all cases, but not a single one of them could give a clear example why. In reading this blog I found that everyone seems to have an opinion one way or the other, but there wasn't any clear-cut reasoning that would change my previous mindset. I feel SPs are necessary when doing transactions that change a large number of tables, or when changes/results are dependent on other changes/results, etc. But I could not find a good reason why all accesses should be made using SPs. If there really was a performance increase, then I would have a good reason for using SPs on all my transactions, even down to the most basic ones that occur. So I set up a small, quick and dirty benchmark for myself. I considered a drop-down field that listed the contents of one column from a table. Very simple, and very easy to set up. I made a Windows form in C# that consisted of two combo boxes, a button and two label fields. When the button is pressed, I open an SP, loop through the results and save the number of milliseconds it takes to fill one of the combo boxes. I did the same using a straight SQL query. Here are my results:



    SPROC: 328ms SQL: 15ms



    Granted, the first time it ran the SP had to compile. I went on to try it again, obtaining the following results:



    SPROC: 67ms SQL: 1ms

    SPROC: 16ms SQL: 0ms

    SPROC: 43ms SQL: 0ms

    SPROC: 48ms SQL: 3ms



    Possibly this is not a good benchmark, or possibly, just possibly, a lot of DBAs are so consumed by what they think is right that they don't bother to use all of the tools available to them. Sure, SPs have their place in SQL Server, but so do views, and triggers, and relation diagrams, and jobs and functions etc. etc. If I were to set up a benchmark where multiple tables were updated or accessed, I am sure the SP would benchmark better than the straight SQL, but that is not the point I was trying to make here. I cannot think of a single application that ever existed, or that will ever exist, that will not have a bunch of basic queries like the one I described above. Let's stop arguing about which is better and give some good advice to people who are trying to properly set up their databases. Use SPs when you need to, and use dynamic SQL when you need to; don't get stuck on just one technology when multiple technologies are offered. If you have a good solid application, there should only be one single object you would need to change, given any DB change, and if that class is set up properly, it should validate itself anyway, so you can change the DB to your little heart's content and still have a working application. On the other hand, if you make a table change that would affect an SP, you should set a default value so that the SP doesn't break.



    Please feel free to tear apart this post as much as you would like, but all I would like to see is 1 single good reason not to use both SP and regular SQL queries. More to Manage? Not if you are a good developer.

  • I strongly believe that tables are useless to database development. Here is why:



    1. Tables get big and make your database big.

    2. Tables have nothing to do with what a database is for; a database is a base for data.

    3. If you do not use tables at all, then you do not need to worry about dynamic SQL or stored procedures.

    4. What is a table? A bunch of data, and data should be kept in the database.

    5. With tables you have to burden yourself with the normal forms.

    6. No more huge relational diagrams tying tables together, because there aren't any to tie together.

    7. Reduces the need to write cleanup/archive packages, because there are no tables to clean up/archive.



    Why limit ourselves to not using stored procedures; let's just limit ourselves to not using tables, and that solves all of our database problems. See everyone, sometimes the simplest solution solves everyone's problems. Now let's all hold hands and sing kumbaya.



    [And most of all........F!?k you!]

  • Hear, hear, Sheri Moon,



    It seems all these threads boil down to one thing: performance. Well, I can't see a real problem with 1, 2 or 10 more round trips, more traffic, or a few more bytes of RAM used, when we are moving into an era where every household will have a T1 or better connection sooner or later, home computers triple in speed every 3 months or so, and servers can take ever more CPUs, not to mention RAM.

    By the time this argument is resolved, what you have learnt will have become obsolete, and it will be time to take up a new language.

  • ... can be avoided if you don't write them on the server, as text in a text editor. Instead, you write code to create them as part of the (administrator code in the) app, generating the SP code on the fly with knowledge of the database schema. If someone decides to change the width of a column, or add a new column, or remove a column, the stored proc is re-built.



    I didn't read the entire blog, or the blogs people referenced, but I didn't see this idea. You can get a lot of the advantages of both approaches by using the same tools that build dynamic SQL to build the SQL in your stored procs.
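The idea above, re-generating the sproc bodies from schema knowledge, can be sketched quickly. A hypothetical helper (all names invented; Python used only as the generator, emitting T-SQL text) builds a CRUD-style INSERT procedure from a list of columns pulled from the live schema, so a column change just means re-running the generator:

```python
def make_insert_proc(table, columns):
    """Generate the T-SQL for an INSERT sproc.
    columns: list of (name, sqltype) tuples taken from the live schema."""
    params = ",\n    ".join(f"@{n} {t}" for n, t in columns)
    col_list = ", ".join(n for n, _ in columns)
    val_list = ", ".join(f"@{n}" for n, _ in columns)
    return (
        f"CREATE PROCEDURE usp_{table}_Insert\n"
        f"    {params}\n"
        "AS\n"
        f"INSERT INTO {table} ({col_list})\n"
        f"VALUES ({val_list})\n"
    )

ddl = make_insert_proc("Customer",
                       [("Name", "nvarchar(50)"), ("City", "nvarchar(50)")])
print(ddl)
```

This is the same machinery a dynamic SQL generator uses; the only difference is whether the generated text is executed directly or stored on the server as a procedure.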

  • The creators of SQL relational databases did not include stored procedures for their good health. The bottom line is: a good relational database provides many mechanisms for manipulating data, and stored procedures are just one of them.



    Most importantly, none of these mechanisms is meant to be used exclusively. Stored procedures have their place at times, and so do dynamic sql statements. Neither one is better than the other at everything! Each one is better than the other at something!

  • Both Stored Procedures (SP) and dynamic SQL (ad-hoc) have their place.



    What this blog is about is "performance".



    Everyone has always said that SPs are 100% faster than dynamic SQL (ad-hoc).



    It seems that many of you have gone away from this topic because maybe you feel that there really is no big difference in speed or performance. So you feel that you have to post stupid remarks like “If you do not use stored procedures in my company, you will be fired immediately. I do not want sloppy code, but I want professional code.” You know what, David B, please post your company’s name so that we know which closed-minded company you own, and we know to stay away from it. By far the worst post is by David; it had nothing to do with the topic and shows his truly closed-minded way of thinking.



    Now, many of you know Rob Howard. He is a program manager on the .NET Frameworks team at Microsoft. Specifically, he works on the ASP.NET feature of the .NET Framework. Rob is the guy that started all this chatting. He himself has stated that he was wrong in what he said on his blog's homepage. He went on to say that Frans Bouma's blog helped him to see the light.



    Hmmmmm… maybe if more of you “read” before saying anything, you too might see that the performance of SP and dynamic SQL (ad-hoc) is just about the same.





    Stored Procedures and Dynamic SQL (ad-hoc) should both be used.

    Just use them intelligently!!!!





    AND REMEMBER!!!!!!



    What this blog is about is "performance"!!!!!!!! So talk about the performance between Stored Procedures and Dynamic SQL (ad-hoc) only please.

  • I'm no DB guru.

    I looked at this thread to get an idea of what I should, or should not be doing.

    Well thanks Guys, now I've got N.F.I. after all the differing opinions.



    The only thing that seems clear to me is that anything that has a fanatical element (religious zealots, football fans, one-party political voters, stored proc haters/lovers, etc.) usually results in those people being so clouded by their own bias that they cannot see the other side's benefits.



    Frans and the like seem like nothing more than people who are intent only on trying to prove how smart they are, and not applying common sense to a particular situation.



    Now, these people probably are geniuses. The problem is that most people aren't. So if you need to be a guru to work out all the ways you can get around using Stored Procs, then you are proving even more that you are just trying to big note yourself.



    The other thing is that I must be on a different planet. I work in a LARGE IT environment. NO-ONE develops an app from start to finish. We have up to 6 different areas working on the one app. DBAs do DBA work, ASP programmers do that, etc. etc.

    If anyone tried to be a smart arse and code the thing from start to finish they'd be out the door.

    Writing SQL and the like in your apps might be fine in a Mickey Mouse environment, but if you want to follow professional IT practices and methodologies, leave the "I'm a guru, and can do it all" attitude at the unemployment office...

  • "Frans and the like seem like nothing more than people who are intent only on trying to prove how smart they are, and not applying common sense to a particular situation."

    I am not trying to do that, why should I? :) I have developed stored procs for years, don't get me wrong here.



    I've however by doing that, seen too much misery in the maintenance department to keep on saying 'stored procedures are great'. They're not. They have serious issues. Until people start acknowledging them, the issues will stay and keep on popping up.



    If you'd have read the article correctly, JR, you'd have seen that I wasn't arguing about sql concatenation in an application. I was just arguing about the myths which are used to promote stored procedures.



    You don't have to believe me, but in a 15-people project team you can fire at least 3 if you move to dynamically generated (not concatenated) SQL. Reason? No maintenance hell when a stored proc requires an extra parameter and a lot of code has to be altered to make that happen (yes, in two areas at least: DBA and DAL programmers).



    You work in a big organisation. So when this has to be applied, a lot of overhead will be created to schedule the change. Are you going to pay for what that overhead costs? Probably not. However, it does cost money (and time); a loss which could have been avoided.

  • Nice article Frans...



    But I do have a few concerns...



    1) The application has to have a detailed model of the database itself.

    2) The assumption that "your" application is the only one hitting the database and thus will be following good techniques for access/manipulation.

    3) The assumption that the application programmer is fully aware of the database schema (sort of the same as point 1).

    4) Transaction management (again, sort of the same as point 1).



    On the performance note....I agree here except for one instance.....



    And I quote BOL...(About Stored Procedures..)



    "If the operation requires a large amount of Transact-SQL code or is performed repetitively, stored procedures can be faster than batches of Transact-SQL code. They are parsed and optimized when they are first executed, and a compiled version of the stored procedure remains in memory cache for later use. This means the stored procedure does not need to be reparsed and reoptimized with each use resulting in much faster execution times."



    Of course the plan is cached for any statement, and I would like to emphasize that the statement says "CAN BE FASTER", not "ARE FASTER". A very specific case indeed for suggesting a sproc could be faster than d-sql....



    Not much of a deal really, but it should be kept in mind if a database routine fits those requirements...

  • I think you are missing the point. Stored procedures and dynamic queries should not be used as sole stand-alone implementations. They should be used in conjunction with one another, period.

  • Can't we all just get along and admit that the MS Access DB model is tops!!!! :)

  • Hey, I just sorta started databasing and I read almost all these comments trying to decide whether I should use Stored Procedures. But I have one question for any geniuses out there.....



    How the hell do I stop injections without using stored procedures?



    And I don't want twenty replace statements....



    Very interested, Micah

  • Oh yah, and my department doesn't have .net

  • Micah Peterson, check out parameterized queries, they should solve your problem nicely if you really intend to live life without SPs.



    Personally I also totally disagree with the writer that dynamic/embedded SQL are better than stored procedures. Stored Procedures are definitely the way to go for run of the mill business applications, but one must admit (like some other posts have) that there are indeed cases where dynamic SQL may be better suited. Like all things in the world: “It depends”
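    For Micah's question, the placeholder mechanism is the whole trick, and it works without stored procedures and without .NET. A minimal sketch using Python's built-in sqlite3 module purely for illustration (the same `?` placeholder idea applies to any client library that supports parameterized queries):

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Attacker-controlled input: bound through a placeholder, it is treated
# as a plain value and never spliced into the SQL text, so the usual
# quote-breaking injection tricks simply don't work.
evil = "alice' OR '1'='1"
rows = conn.execute("SELECT id FROM users WHERE name = ?", (evil,)).fetchall()
# No rows match that literal string, so the injection attempt fails.
```

    No replace statements needed: the driver keeps the SQL text and the values separate all the way to the server.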

  • Stored procedures ROCK!!

  • Two more cents.



    I say use them both. Each technique has its strengths and weaknesses in different situations. I will admit that for years I have been a victim of thinking that sprocs are so much faster than dynamic SQL. After seeing the excerpt from BOL I was quite shocked.



    My experience.



    I have been doing the old web database application development thing for 4+ years now, and early on I used mostly dynamic SQL statements in my apps. This was pretty easy considering the code was ASP pages and could be changed without sending them through a compiler. On the other hand, the SQL statements that were large were ugly and somewhat difficult to maintain.



    One thing I keep hearing about is creating an SQL generator. I have never seen one. Have you? Maybe I haven't because I have never looked or even thought about building one. Sounds like a good idea though!



    Nowadays I use mostly sprocs.



    Why?



    It's easy to maintain security on sprocs. My apps are built for internal company operations; we have a company-wide Active Directory. I can use security groups from the AD on my sprocs and then hand the adding and removing of users in the AD groups to my Help Desk. This is one less thing I have to do.



    I also like the fact that I don't have to write code to guard against SQL injection attacks. Once again, one less thing I have to deal with.



    One disadvantage that I have seen is the domino effect that happens when you make a change to your data structure. Going from a varchar(5) to a varchar(10) (because business requirements change), especially if the column is a primary key with many foreign key references, can create a very big headache. Would this be the same if I were using dynamic SQL? Yes, but not to the same extent, mainly because you're not data-typing parameters in dynamic SQL statements.



    In Oracle, there are keywords %TYPE and %ROWTYPE. They allow you to datatype a sproc parameter to whatever the current type of a column is, so as the database changes, the parameter's datatype changes automatically. No domino effect! Oracle also has what's called named notation for parameters. This allows you to specify parameters in any order you want, so as the sproc's parameter list changes, your code doesn't break.





    Do I always use them? Hell no!



    I have a few instances where I have needed the flexibility of a dynamic SQL statement because of the amount of sorting and filtering combinations needed. Having to code this flexibility into a sproc, or many sprocs, would have been very difficult and taken far more time.
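    For anyone who, like the commenter above, has never seen an SQL generator: the core of one is small. A toy sketch of the kind of component Frans describes, which builds a parameterized UPDATE touching only the fields that actually changed (table and column names here are made up for illustration; a real generator would also validate names against the schema):

```python
def build_update(table, changed, key_column, key_value):
    """Build a parameterized UPDATE covering only the changed columns.

    `changed` maps column name -> new value. Untouched columns (e.g.
    large blob fields) never appear in the statement, so their data is
    not sent to the server at all.
    """
    assignments = ", ".join(f"{col} = ?" for col in changed)
    sql = f"UPDATE {table} SET {assignments} WHERE {key_column} = ?"
    params = list(changed.values()) + [key_value]
    return sql, params

# Only the Name field changed on this (hypothetical) Customer entity:
sql, params = build_update("Customer", {"Name": "Acme"}, "CustomerID", 7)
```

    The resulting statement text is stable for a given set of changed columns, so the server can cache and reuse its plan like any other parameterized query.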

  • I have to agree more with Frans' idea.

    Dynamic SQL is a very, very powerful programming tool. I have written a business platform which we use in our company, and it dramatically reduces development time. For example, with each 'data entry form' you design, you get loading of records in a list, deleting records, automatic completion of 'primary keys' data entry, and record navigation on different fields (like record creation date, or record primary key...). All these features come for free when you simply design the form, just by including one record in the database for each 'data entry page' you design. This record basically contains the table name, table primary key, and alternate keys for navigation. That's it. However, I will not agree with Frans 100%: the extra layer of indirection that an SP gives you is sometimes very important in your programming model, but performance is not what I look for when I program SPs.

  • I would say it depends on the requirement. If the requirement calls for a huge application and can support the cost, dynamic SQL will benefit you in the long run through (so-called) re-use; but for small business-oriented applications I believe in SPs: they are faster to develop, easier on resources, easy to maintain, and they make life easier.

  • You miss a big point

    Say you want to begin a transaction, insert a record into several tables, and end the transaction. If it's in an SP, you pass all the data to insert as parameters. The data travels across the network or internet; then, on the server, the transaction begins, you do all your inserts without network transport between them, and the transaction ends.

    Having all the data on the server before I start a transaction is why I use SPs.
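    The round-trip point is fair, but note that client-side code can also wrap several statements in a single transaction, so the all-or-nothing guarantee itself does not require a stored procedure. A minimal sketch with Python's built-in sqlite3 (table names are made up; any client library with transaction support works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, note TEXT)")

# One transaction spanning inserts into several tables: the context
# manager commits on success and rolls back if an exception is raised,
# so either both rows exist afterwards or neither does.
with conn:
    conn.execute("INSERT INTO orders (total) VALUES (?)", (99.5,))
    conn.execute("INSERT INTO audit (note) VALUES (?)", ("order created",))
```

    Whether the extra round trips between those statements matter is a separate, measurable question that depends on network latency and transaction length.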

  • I understand that existing systems have stored procs for inserts, updates and deletes, so you can't expect people to go and redo their work.



    I am a firm believer that you use the technology that will be the easiest to maintain in the long run for a specific job, NOT necessarily the technology that will give the best results. Maybe I am wrong, but as a developer I will be the one doing the work in the end.

    For example, Assembler and C++ produce faster-executing code than the .NET CLR. Why don't we all use them then? Because they are harder to maintain and take longer to do the job, so they are NOT the best tool for the job. Now why should communication with a database be any different?



    Okay - network traffic will be higher for a complex piece of LLBLGen logic if your DB and web server are two physically different machines - so ask yourself: is LLBLGen the best tool for this specific section of the job?



    There are cowboys that want to hard-code SQL statements in their command objects and write a sproc for every simple DB I/O operation (this does seem to be what Microsoft envisioned if you look at the .NET CommandBuilder).



    Rather use the tool that does the job the best.



    For selects, inserts and updates, use dynamic SQL.

    For complex logic that requires information to be gathered from numerous tables with huge updates, use sprocs if network traffic could be a problem.



  • I really doubt anyone here has built any apps that would run any differently if they used stored procedures or not. lol!

  • I must be missing something. If you have over half a dozen fields that can be chosen on a form, regardless of whether one uses a stored proc or an O/R mapper (with parameters), the plan will never be cached, right?

    select where thisdata

    select where thisdate to thatdate

    select where company and thisdate

    select where saleman, company, thisdate

    and on and on -

    An O/R mapper is supposedly just less of a code headache, yes.
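    On the caching question: the plan cache keys on the SQL text, so each distinct filter combination yields one distinct parameterized statement, and that statement's plan is reused every time the same combination recurs; only genuinely new combinations pay for a fresh compile. A sketch of such a filter builder (the `sales` table and its column names are invented for illustration):

```python
def build_select(filters):
    """Build one parameterized SELECT per combination of filter fields.

    Sorting the column names makes the generated text deterministic:
    the same set of filled-in form fields always produces the same SQL
    string, so the server can cache and reuse its plan for that shape.
    """
    cols = sorted(filters)
    sql = "SELECT * FROM sales"
    if cols:
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col in cols)
    return sql, [filters[col] for col in cols]

# User filled in two of the form's fields this time:
sql, params = build_select({"company": "Acme", "salesman": "Bob"})
```

    With half a dozen optional fields there are at most 2^6 = 64 distinct statement shapes, which is a trivial number of cached plans.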

  • "Now, let me add something about performance here. Say, you have a database with 100 tables, with an average of 7 fields per table. This requires 100 update stored procedures, at least (the crUd procs). These 100 procedures will have on average 7 parameters to update all fields at once, because you can't create optional update statements in stored procedures. You can't create stored procedures for all possible combinations either, that would require 100*7! procedures, which is quote a lot. A component which will create the SQL on the fly will tailor the UPDATE query to the job it has to do, for example updating that single varchar field in a row with 2 blob fields. The blob fields aren't passed to the database again, because that field isn't updated. Another nice thing about creating SQL on the fly is that you can create the WHERE predicate on the fly. Foreseeing every possibly WHERE predicate for an UPDATE stored procedure is undoable. You can fall back to optional parameters, but that will do you no good either, because COALESCE isn't that great when it comes to performance"





    Somewhat of a false statement. You can create a single sp for selects and one for updates that will allow you to pass in or get back only the records you need.



    Example



    create proc _usp_me
        @iServiceID int = null,
        @iServiceNm varchar(50) = null
    as

    set nocount on

    select
        ServiceID,
        ServiceNm
    from
        tblMe
    where
        IsNull(@iServiceID, ServiceID) = ServiceID and
        IsNull(@iServiceNm, ServiceNm) = ServiceNm

  • Stored procedures are great, if you don't count maintainability or portability. Coding business logic into the database via SPs makes about as much sense as coding it into Web pages with ASP/JSP/etc.



    A database is a datastore, not a business engine. Or is it? Why not code everything in the database? You could write 80% of many applications as a stored procedure. Why not take that approach and virtually eliminate round-trips to the database?



    What do you do if you code your biz logic in stored procedures and then realize that the answer to your performance problem is to process the data in-memory instead? You re-write. Call me lazy, but I hate to re-write.

  • I've been working with SQL Server off and on since 1991. I never wrote a bunch of CRUD sprocs. To me, that always had "maintenance nightmare" and "double work" written all over it.



    Sprocs are great for some things. I just wrote one that figures out the status for a piece of equipment and when it's next available. A wonderful example of putting a little bit of intelligence on the server to answer a question that can't be answered by looking directly at the equipment or schedule tables. I have no stored procedures for updating those tables. There's something called Structured Query Language which is made just for that.



  • Ok,



    I must disagree as well. Yes, it is slightly more difficult to change a table when access is implemented through stored procedures. But it does minimize the risk of buffer overruns and malicious code being executed on your database server. If performance is what you're looking for, then buy a bigger box. Security must be the highest priority, then maintainability, then performance.

  • I jumped into this article when I was in a dilemma over whether to use stored procedures or dynamic SQL. Well, after reading the arguments on both sides I am back to square one. If these forums fail to provide a novice with the right direction, then these forums are just a spot where people with knowledge discuss for the sake of argument and nothing else.

  • I hit this link because I had an argument with my pair programmer, just because he wanted to show me up. I think this thread lists all the good and bad things about using and not using SPs. You can figure out what is best for you. Use your brain, don't just follow. I have used both ways in different projects. Where I did not want my application to expose logic to DBAs or my client, I used statements in my application; and when I knew the product was for my employer and he had the right to control the code, I used SPs.

    I think databases should allow classes or a layer of business objects to be attached to them. That is the only way I will feel safe while coding.
