February 2004 - Posts
Apparently, my post on Distributed Data Security is getting some attention from Ian Griffiths and John Lam, two people I very much respect. I asked for feedback, and now I am getting some.
I feel it is time for further clarification. When I wrote that piece, I had in mind some of the questions others had asked about the use of Enterprise Services, the lack of good material on distributed .Net computing, and what security has to do with distributed computing (if anything). I also had in mind my own work on a highly transactional, distributed database application that uses Oracle, Enterprise Services, and ASP.NET. And I had in mind what was (in the unmanaged world) versus what is (in the managed world), as I am working between the two as well.
Ian mentioned using integrated security with SQL Server, and that is a good strategy. But if not done correctly, you can kill your connection pooling. Also, I work with Oracle and cannot use integrated security, so I do have to store credentials somewhere. Oracle also installs most of the client tools by default (like SQL*Plus) and keeps database instance names in open files like tnsnames.ora. For me, that's unacceptable from a security viewpoint. There are ways around these issues, of course, but business requirements and restrictions can come into play as well. Not all is black and white.
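To illustrate the pooling point, here is a minimal sketch (server and database names are made up). Connection pools are keyed by the connection string and the Windows identity that opens the connection, so impersonating each end user fragments the pool, while a fixed process identity preserves it:

using System;
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // Integrated security: no credentials stored in config or code.
        // Caveat: pools are keyed by connection string AND Windows identity,
        // so per-user impersonation gives each user their own (tiny) pool;
        // a fixed process identity keeps one shared pool.
        SqlConnection conn = new SqlConnection(
            "Data Source=DBSERVER;Initial Catalog=Orders;Integrated Security=SSPI");
        conn.Open();
        // ... execute commands here ...
        conn.Close();
    }
}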
A point I would like to make regarding my own article is that I mentioned the importance of evaluating your own architecture and business requirements:
In conclusion, you must decide how you want to secure your data. Consider all the options and requirements for your particular business and/or architecture, and weigh the advantages and disadvantages.
This is key. The aim of my article, originally, was to point to the various data security options so the reader could make choices based on what is available today and what was previously required (in some cases, there is still a lot of legacy code out there that won't go away for some time). All of these options, of course, work with other options and choices, as I mentioned:
When dealing with distributed computing, there are many design considerations for your architecture regarding scalability, throughput, performance, physical deployment, security and technologies to choose.
Nothing should be done in isolation. And, realistically, it never is.
Both Ian and John point to my statements that you “should” use separate boxes for increased security (in particular, “security in depth”). Again, my intention was to offer alternatives IF the requirements force you to have that kind of environment (as Ian mentioned in his post regarding a possible need for two boxes when distributed databases are involved). To make it clearer: my intention was to point to the security options available if you do have an architecture with two or more boxes.
A bigger question, I believe, with regard to security in depth using layers is whether you make those layers logical or physical. Of course, you code logical layers (in most cases), as that is good design. John correctly points to simplicity in security, and I agree. Separating layers physically does bring a lot more complexity, as my article shows. I apologize for implying that security by physical separation is an “end-all” solution. You always have to weigh all your options and determine what works best for your situation.
Think about your options, try to keep it simple, but always keep security in mind when working on your architecture designs.
I noticed yesterday that the article on Throwing Custom Exception Types from a Managed COM+ Server Application that I blogged about previously is finally available publicly.
This past week, we started implementing the first solution mentioned in the article, and things are working quite well again. It's nice to be able to throw a custom exception from an ES/COM+ Server Application and see it handled by the client! The problem we were having was that once you cross the ES/Server App boundary, the custom exception information would get lost and the base exception would be thrown instead. It has to do with how ES/COM+ uses a combination of .Net Remoting and COM Interop over DCOM to marshal the exceptions.
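The exact solutions are in the article; as a minimal sketch of the underlying requirement (the type name here is hypothetical), any exception that must cross a remoting boundary needs to be marked serializable and provide the deserialization constructor:

using System;
using System.Runtime.Serialization;

[Serializable]
public class OrderProcessingException : Exception   // hypothetical name
{
    public OrderProcessingException(string message) : base(message) {}

    // Required so the remoting infrastructure can rehydrate the exception
    // on the client side of the ES/COM+ boundary.
    protected OrderProcessingException(SerializationInfo info, StreamingContext context)
        : base(info, context) {}
}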
As I mentioned before, please read this article to get a really good look at how Enterprise Services works underneath. I remember spending hours with ILDASM, Anakrino, and the Rotor code trying to figure out what was going on. You really need to know Advanced .Net Remoting, COM Interop, and basic COM+ to understand the plumbing of Enterprise Services, and this article confirmed that and a lot more. Of course, Enterprise Services hides much of this, as it should, but you still need to be aware of what's happening when you cross the wire.
Adrian Bateman wonders if I meant you would expose the data components at the point of physically separating tiers:
First of all, I'd be reluctant to make the distribution break purely at the data access level. For me, the data access tier is all about dealing with the storage of entities. Each component deals with only one entity and as such each method only reads or writes to one entity type at a time. (By entity I typically mean the nouns in your system and these tend to tie to the main tables in a database - things like a person, product, or order.) The next layer up uses business rules to combine these entity operations together into meaningful business transactions (e.g. creating an order might create an order entity and add line item entities to it, etc.). I am more inclined to provide a distributed service using a business facade over these business rules and for that to be the security barrier. This helps to ensure that data integrity is maintained by the business rules and promotes reuse of the service in a more robust manner. As I said, I'm not sure if this is what Robert means or whether we differ here.
I apologize if this wasn't made clearer in my previous post. I always advocate some kind of facade layer (it could be a business rules layer, could be something else) over the data components. In fact, the data components should be hosted in-process on the server machine only. This ensures, in our example, that the web server components can never directly call the data components hosted on another machine.
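As a rough sketch of the shape I have in mind (all names hypothetical), the facade is the only remotable type; the data components stay internal to the server assembly:

using System;

// Internal: never exposed across the wire.
internal class OrderData    // hypothetical data component
{
    public void Save(string orderXml) { /* talk to the database here */ }
}

// Public facade: the only entry point remote callers (e.g. the web server) see.
// Business rules run here before any data component is touched.
public class OrderFacade : MarshalByRefObject
{
    public void SubmitOrder(string orderXml)
    {
        // ... validate business rules first ...
        new OrderData().Save(orderXml);
    }
}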
In response to comments from Adrian Bateman, Tiago Pascoal, and others I would like to continue and elaborate on “Enterprise Distributed Computing in .NET”. Feedback is welcome.
When dealing with distributed computing, there are many design considerations for your architecture regarding scalability, throughput, performance, physical deployment, security and technologies to choose. Of these, I am offering some ideas regarding data security, and in particular, distributed data security as it relates to current (and future) .Net technologies.
Local Data Storage and Security
If you develop an ASP.NET or Windows Forms application, you typically use some kind of data store. That data store could be an XML file, a small MSDE database, or a full-blown database like SQL Server or Oracle. Depending on how sensitive the data is (i.e. if someone else obtained this data, could they compromise your business?), you must make decisions about how to secure access to it. You could decide to keep everything open because the data is read-only and doesn't contain information like passwords, credit card numbers, etc. But read-only data should still be secured so that only the right people read it. And, of course, if the data DOES contain important “not for prying eyes” information, you definitely want some kind of secure access.
If you store your data with your application (on the same box), you must take precautions to make sure the data is both accessible to the application and secured from tampering. This may involve using ACLs to secure access to a file (i.e. XML or other file data), or running database services like SQL Server and Oracle under low-privileged users with well-defined access control to only the needed stored procedures, tables, and views. You may also use encryption, one-way hashed passwords, and salt values to further secure internal data so it is not easily read by outside observers. You should encrypt your connection string information as well, to make it difficult for an outside observer to break into the database.
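For example, here is a minimal sketch of one-way hashing a password with a random salt using the .Net crypto classes (the storage scheme is up to you):

using System;
using System.Security.Cryptography;
using System.Text;

class PasswordHasher
{
    static string HashPassword(string password, byte[] salt)
    {
        byte[] pwdBytes = Encoding.UTF8.GetBytes(password);
        byte[] salted = new byte[salt.Length + pwdBytes.Length];
        salt.CopyTo(salted, 0);
        pwdBytes.CopyTo(salted, salt.Length);
        byte[] hash = new SHA1Managed().ComputeHash(salted);
        return Convert.ToBase64String(hash);   // store this plus the salt, never the password
    }

    static byte[] NewSalt()
    {
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);  // cryptographically random salt
        return salt;
    }
}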
Remote Data Storage and Security
If you require further security for your data, you should host the data store away from the application on another box. Most applications can access remote data stores (i.e. SQL Server, Oracle, etc.) across the wire using standard network protocols. In .Net, you can use System.Data.SqlClient, System.Data.OracleClient, etc. to make your remote calls, and things work nicely. Depending on your security requirements, this may be enough, and it is quite common.
One important issue with hosting a database locally or remotely and connecting to it from the local application is that the database connection information must be stored somewhere near the application: in the registry, an XML configuration file, or (ugh) inside the code itself. As mentioned above, you should encrypt this information, but then you have to figure out the best way to store the encryption keys on the same box. (With DPAPI you can go one step further and take advantage of user authentication so you remove the need to manage encryption keys, but this has its own considerations as well.)
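As a hedged sketch of the DPAPI route: on .Net 1.1 you would P/Invoke the Win32 CryptProtectData function; the managed ProtectedData class shown below only arrives in a later framework version, but it illustrates the idea:

using System;
using System.Security.Cryptography;
using System.Text;

class ConnectionStringVault
{
    static byte[] ProtectConnectionString(string connStr)
    {
        // DPAPI encrypts under credentials derived from the machine (or user)
        // account, so no encryption key is stored with the application.
        return ProtectedData.Protect(
            Encoding.UTF8.GetBytes(connStr),
            null,                               // optional extra entropy
            DataProtectionScope.LocalMachine);  // any process on this box can decrypt
    }
}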
Another step in further securing your data, and the one I advocate, is to move the actual data access layer of the application onto another box to promote security in depth. By security in depth, I mean introducing more secure layers that someone would have to “break through” in order to get at your data. By moving your data access to another box, you remove the need to store database configuration information on the local application box. Of course, when you host remote components on another box, you have to figure out how to access those components from the application. In .Net, there are (at the moment) a few well-known methods: Remoting, Enterprise Services, and Web Services (you could also use sockets, but that is another topic).
.Net Remote Data Components and Security
Hosting remote components brings up another important issue: you have to make sure the communication between the two boxes is secure, because otherwise your data could be seen by a network listener. The same is true if you access your remote database directly from the application – anything sent over the wire in clear text is available to prying eyes. In the rest of this article, I will detail some of the options available through the three .Net methods mentioned above for secure remote communications.
.Net Remoting is a good replacement for DCOM (Distributed COM, the unmanaged-era way to call remotely hosted COM components), as it provides a way to cross .Net AppDomains, whether on the same box or on another box. Remoting is also extensible, unlike DCOM. The nice thing about DCOM, though, was that security was built in via RPC encryption. Out of the box, Remoting does not come with security built in; you must either use SSL/ASP.NET security with an HttpChannel or IPSec with a TcpChannel. Though a TcpChannel provides better performance, you should only use it in a guaranteed secure network environment. Another alternative is rolling your own authentication and authorization scheme, and there are examples on the internet of how to do this (like this one).
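For example, here is a minimal server-side sketch exposing a MarshalByRefObject over an HttpChannel, so it can sit behind IIS with SSL and ASP.NET security (type name, URI, and port are made up):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

public class OrderFacade : MarshalByRefObject { /* hypothetical remotable type */ }

class RemotingHost
{
    static void Main()
    {
        // HttpChannel: can be secured with SSL and ASP.NET security when hosted in IIS.
        ChannelServices.RegisterChannel(new HttpChannel(8080));
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(OrderFacade),        // the remotable type
            "OrderFacade.rem",          // endpoint URI
            WellKnownObjectMode.SingleCall);
        Console.ReadLine();             // keep the host process alive
    }
}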
Enterprise Services is essentially a managed wrapper around the COM+ services. If you need them, Enterprise Services/COM+ provides distributed transactions, role-based security, object pooling, event services, and many other benefits. If you don't need these services, you could look at other options instead. One benefit Enterprise Services provides for secure remote communication is that it uses the built-in facilities of DCOM security underneath. When using Server Applications hosted remotely, Enterprise Services uses .Net Remoting to cross AppDomains, with DCOM as the remoting channel. DCOM provides RPC encryption (or packet privacy, as it is called in the COM+ settings), and Enterprise Services makes use of this. When hosting Enterprise Services Library Applications, .Net Remoting is used to cross local AppDomains, as all Library Application calls are in-process.
For Server Applications, Enterprise Services combines and extends Remoting (as mentioned above) and uses COM Interop to reach the DCOM services. For raw performance and security (thanks to COM+/DCOM), it's actually a good choice for secure remote communication. But it has its drawbacks. As a result of COM Interop, you can have problems marshaling certain .Net types. Also, as a by-product of DCOM, you may have to open certain ports in your firewall to allow standard RPC/DCOM calls through (though you can lock down the port range). Like Remoting, Enterprise Services should be used with local (same box) or middle-tier components only, and not hosted across the internet. Again, consider Enterprise Services if you need some of the other services mentioned above, and not only for secure remote communication.
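A minimal sketch of a server-activated Enterprise Services component with role-based security switched on (application, role, and type names are made up; packet privacy is configured via the COM+ authentication level):

using System;
using System.EnterpriseServices;
using System.Reflection;

[assembly: AssemblyKeyFile("key.snk")]                     // ES assemblies must be strong-named
[assembly: ApplicationName("OrderApp")]
[assembly: ApplicationActivation(ActivationOption.Server)] // out-of-process, DCOM underneath
[assembly: ApplicationAccessControl(true)]                 // enforce COM+ role-based security

[SecurityRole("OrderClerks")]
public class OrderComponent : ServicedComponent
{
    public void SubmitOrder(string orderXml)
    {
        // By the time we get here, COM+ has authenticated and authorized the
        // caller, and DCOM can encrypt the traffic (packet privacy) if configured.
    }
}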
Web Services provide the ability to communicate with another box sitting within your network or across the internet. For interoperability with other systems, they are the ideal choice, having become (or at least becoming) the industry-standard way of communicating across different platforms. Using XML/SOAP as the message medium, you can make remote calls from your application. As far as security goes, you have to make sure your message has some measure of encryption (in particular, for the most sensitive parts). Typically, Web Services calls are plain text, but you can secure them using SSL (like Remoting with an HttpChannel). Also, extensions to Web Services like WSE (Web Services Enhancements) and related initiatives (WS-Security) provide ways to further secure Web Services communication (WS-Security has already been implemented in WSE 2.0).
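For example, here is a bare-bones ASMX Web Service (names hypothetical) that you would then host under SSL or protect at the message level with WSE:

using System.Web.Services;

[WebService(Namespace = "http://example.com/orders")]
public class OrderService : WebService
{
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        // Calls arrive as XML/SOAP over HTTP; host this endpoint under HTTPS (SSL)
        // or apply WS-Security via WSE to protect the message itself.
        return "Pending";   // placeholder result
    }
}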
For the best ways to secure any of these methods, please look at these checklists (and related articles):
Checklist: Securing Remoting
Checklist: Securing Enterprise Services
Checklist: Securing Web Services
I can’t do justice to my coverage of distributed data security without also mentioning the future of .Net, called Indigo, which takes the best of all three methods (Remoting, Enterprise Services, and Web Services) to provide remote communication services that include performance, security, transaction support, and a host of other service-oriented architecture (SOA) benefits. This is the future, and one that is anticipated to take us into the next phase of distributed computing.
In conclusion, you must decide how you want to secure your data. Consider all the options and requirements for your particular business and/or architecture, and weigh the advantages and disadvantages. Hopefully, though, I have presented a strong case to consider distributed data security when establishing the physical architecture and deployment of your project.
Today, our team made a trip to Softpro Books at their new location in Waltham, MA, as we hadn't been in a while. If you are in the area, definitely stop by – the new shop is still a developer's paradise!
I picked up a newly published book on security: Security Warrior by Dr. Cyrus Peikari and Anton Chuvakin. Dr. Peikari was also co-author of another book I have read and enjoyed: Windows .Net Server Security Handbook (published right before Windows .Net Server was finally renamed Windows Server 2003).
Here is a short description of what Security Warrior is all about:
What's the worst an attacker can do to you? You'd better find out, right? That's what Security Warrior teaches you. Based on the principle that the only way to defend yourself is to understand your attacker in depth, Security Warrior reveals how your systems can be attacked. Covering everything from reverse engineering to SQL attacks, and including topics like social engineering, antiforensics, and common attacks against UNIX and Windows systems, this book teaches you to know your enemy and how to be prepared to do battle.
Security Warrior places particular emphasis on reverse engineering. RE is a fundamental skill for the administrator, who must be aware of all kinds of malware that can be installed on his machines -- trojaned binaries, "spyware" that looks innocuous but that sends private data back to its creator, and more. This is the only book to discuss reverse engineering for Linux or Windows CE. It's also the only book that shows you how SQL injection works, enabling you to inspect your database and web applications for vulnerability.
The book is published by O'Reilly. They have made a sample chapter available as well: Chapter 2: Windows Reverse Engineering. I will post comments and/or review after I have read the book, but so far, this looks great!
Sam Gentile posted an excellent article on the lack of real .Net distributed application development and examples. Others have commented on this article as well.
Sam and I have talked about this a great deal in our own work, and we have bounced ideas back and forth regarding how to create good distributed architectures. One reason I favor multiple boxes, beyond some scalability benefits, is SECURITY.
What happens when the web server is compromised, and your database credentials are sitting there open for anyone to look at? What happens when the web server is compromised, and someone looks in the registry at the DSN settings to see where that database is located, and how to access it?
My problem with many n-tier examples is that while they are getting better at separating the logical tiers, they say nothing about how to separate the tiers physically. It can't be done easily, because everything is coupled to the web.config file.
Speaking of security, how many examples show you how to create a partial-trust ASP.NET page in order to isolate the web application from full-trust resources? I count only one or two. How many examples have I counted that defaulted to “sa” as the database user, without explaining how bad this really is? Unfortunately, many. Remember those basic security principles: security in depth, low-privileged users, etc.
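For reference, dropping a web application out of Full Trust is a one-line configuration change (a sketch; choose the trust level that fits your requirements):

<configuration>
  <system.web>
    <!-- Run the application under a restricted permission set instead of Full Trust -->
    <trust level="Medium" originUrl="" />
  </system.web>
</configuration>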
There are more reasons than scalability to physically separate your tiers for development. As Sam said, distributed computing is your friend.
My friend Sam Gentile added to my library of computer books by giving me nearly 1000 of his older books covering C++, COM+, Object-Oriented Programming, and other technology subjects. Sam has been doing some spring cleaning, and he knows I enjoy books as much as he does. Thanks, Sam, for the extremely nice offer!
Sam Gentile details some of the problems our team (and not just ours) has run into over the past few months regarding the use of custom exceptions in Enterprise Services / COM+. The problem is that custom exceptions were not traveling across the ES/COM+ wire intact. In particular, custom exceptions were reverting to their base exception when thrown from an ES/COM+ Server Application. We figured out the “why” (as Sam mentions in his post), but we had no good solution to get around it.
Sam mentions a March 2004 MSDN Magazine article by Bob DeRemer titled Throwing Custom Exception Types from a Managed COM+ Server Application that shows a couple of nice solutions to this problem. The article is not yet available online, but the code is available for download here: MSDNMag0403 code.
For one of my demos last night, I demonstrated how a cookie can be stolen from one site and posted to another site to be recorded for later use. I did this using an ASP.NET 1.1 page. I had believed, as others did, that Cross-Site Scripting (XSS) was caught and dealt with in ASP.NET 1.1 once and for all. But, as I learned from Keith Brown last week, there is a bug in ASP.NET 1.1 that allows you to bypass the XSS checking. If you add a URL-encoded null character in the script tag (i.e. <%00script>), you can bypass the checks and retrieve information. Just as with ASP (for now), you still need to HtmlEncode your input (remember – do not trust user input. The rule hasn't changed!).
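A minimal sketch of the encoding defense (names are mine):

using System;
using System.Web;

class EncodeDemo
{
    static string SafeEcho(string userInput)
    {
        // Never write raw user input back into the page; encode it so
        // <%00script> and friends render as inert text, not markup.
        return HttpUtility.HtmlEncode(userInput);
    }
}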
Kirk Allen Evans blogged about this last November (Update: G. Andrew Duthie blogged information about a hotfix for this. NOTE – as with most hotfixes, there are constraints on its use. The best defense is to use HtmlEncode regardless of the availability of the fix, as well as testing for valid input and rejecting the bad):
ASP.NET 1.1 ValidateRequest Security Flaw
From the DOTNET-WEB list on DevelopMentor:
Monday, September 8th, 2003
As part of Microsoft's attempts to make it easier for application developers to write secure code, Microsoft has added a new feature, named Request Validation, to the ASP.Net 1.1 framework. This feature provides out-of-the-box protection against Cross Site Scripting and Script Injection attacks by automatically checking all parameters in the request and ensuring that their content does not include HTML tags.
WebCohort conducted research on this new ASP.Net feature to determine whether it actually provides protection against Cross Site Scripting and Script Injection attacks.
The ASP.Net request validation feature has an implementation flaw, which allows an attacker to easily bypass the content restrictions, possibly exposing the application to Cross Site Scripting and Script Injection attacks.
Our research shows that the feature consists of banning all strings of the form "<letter" from the content of parameters. Hence the strings "<script>", "<img" and even "<a>" are forbidden, while strings like "</script>" are allowed. When the server encounters a forbidden string in the content of a parameter, it issues an error message to the client.
As a result, WebCohort's Research Team was able to find a simple way to bypass the filtering mechanism: placing a NULL character between the less-than sign and the first character of the HTML tag's name. Since this is no longer recognized by the request validation feature as a valid opening tag, it is ignored. However, many browsers, including Microsoft's IE 6.0, disregard NULL characters in their input.
Hence, when the string is interpreted by the browser, it is treated as an HTML tag, effectively yielding a Cross-Site Scripting (or Script Injection) attack.
The exploit is done by simply adding a URL-encoded null character to the request sent to the server. For instance:
Do not rely on this feature for Cross-Site Scripting or Script Injection protection. The only effective method to avoid such attacks is performing HTML encoding within the application code itself.
Microsoft was approached on Thursday, August 21st, and acknowledged the problem the same day. According to Microsoft Security, an all-purpose (non-security) software update, due to be released in a few weeks, will solve this problem. Since no preview of this update is currently available, the update has not been tested by WebCohort Research.
For those interested, I have made the Secure Coding: Best Practices presentation slide deck available on my website. You can download it from the link below: