Sunday, 31 January 2010

(circa 2006) Why Novell should take on the 'type-safe platform' challenge

Following a recent Twitter thread with Miguel de Icaza, I (successfully) googled up an older post to the SC-L list, written in 2006, where I tried to make the business case for Novell to invest in CAS (Code Access Security). After re-reading it and realizing that (sadly) most of it is still relevant today, I'm reposting it here (to make it easier to link to later)

(for more CAS and Sandboxing related ideas see this PPT 'Making the case for Sandbox v1.1 - Dinis Cruz - SD Conference.ppt ' and this blog post 'Past research of Sandboxing and Code Access Security (CAS)')

Here is the SC-L post that I wrote on 10 May 2006 (link to the original, which was a reply to Gary's 'Microsoft's Missed Opportunity' Dark Reading post):

Dinis Cruz wrote:

> The ones that I wish were listening are Novell and the Mono project.
> The path to a type-safe platform could start there.

Following this comment made on the previous thread, here are the reasons why I wished Novell and the Mono project were listening to that conversation (note: an edited version of this post was sent directly to several Novell contacts who asked me 'What is it you wish we would listen to?'):

---------------------------------

Dear Novell

What I meant by my comment is that there is an opportunity today (2006) for somebody (namely a company + community) to really grab the 'type-safe' + 'sandboxing' flag and run with it.

Here is a quick analysis of where we stand today:

    - Vista failed to deliver an OS based on a type-safe platform
    - 99% (or close) of the .Net Framework and Java code is executed in an environment with no sandbox (i.e. executed: a) in Full Trust, or b) with the Security Manager disabled, or c) with no verification). Given the amount of code deployed out there, there is no chance that a real change will occur any time soon. Currently there is no interest from Microsoft or Sun in addressing this issue and investing the time, energy and resources required to solve it.
    - Microsoft failed to make the paradigm shift from Full Trust to Partial Trust when they released v2.0 of the .Net Framework (which would have been the perfect time to do it)
    - There is good grass roots support for type-safety
    - There is a growing need to create secure and trustworthy applications (with growing support from Governments, Large Corporations and ultimately the end users)
    - Sandboxing at the OS level, like Vista's 'Integrity Level / Privilege Isolation' and Suse's AppArmor (sorry Crispin for not replying to your posts on the previous SC discussion about Sandboxing (it is on my to-do list)), will NOT prevent exploitation of the user's assets (for example the user's email). These techniques are designed to 'control' and 'sandbox' unmanaged code, which is something that I don't believe can be done today. A short-term solution (before we get to a type-safe OS) would be to have environments like these (which do add some security protection to the OS) supported by a managed/verifiable environment responsible for executing the managed/verifiable (potentially malicious) code.
    - Apple has an amazing OS (which I am using at the moment) but doesn't seem to be focused on type-safe / sandboxing issues either. Apple also seems (like most of the Open Source community) to think that it is immune to security vulnerabilities (just look at the way they handle security patches at the moment)
    - Novell has gained a huge amount of respect for its support for the Mono-Project and for its support for Open Source
    - Basically, Microsoft has lost the plot on Security and (as Gary McGraw says) is too focused on bugs and not on architecture. They (Microsoft) will have tough times ahead when Vista proves to be as secure as XP SP2 was.
    - IBM has seen the future and is re-organizing itself around the concept of 'delivering enterprise solutions on top of Open Systems and Open Architectures'

So, like I said above, there is a big opportunity for an Open Source project, led by a major company and based on a solid platform, to lead the way in the move from unmanaged/unsafe code (where I am including Full Trust .Net code in this category) to managed, verifiable and type-safe code (which can be safely executed in sandboxes, with malicious activity easily detected / mitigated)

Novell and Mono fit this bill perfectly.

And it would also give Mono a unique selling point, since at the moment it is still a 'poor cousin of the .Net Framework'.

Ultimately the goal would be to build an OS on top of a type-safe platform. But before that, the user-land world needs to be conquered.

A lot of research and effort must be placed on how to create powerful, feature-rich and fast GUI applications built on type-safe code. This is something that can only be done by a large community focused on a powerful goal: creating secure applications for execution on secure/sandboxed environments.

Imagine if this idea could be developed to such a state where (on Windows) it would be safer to execute C# applications on Mono than on the .Net Framework itself! (another area where Mono could do really well is in hosting of ASP.NET applications (for example based on a Linux distribution of a LAMM environment, hosted by a VirtualServer or VMware host))

I believe that we are watching today the limitations of both the Open Source world (with its 'many eyeballs') and Proprietary Code (with its Secure Development Lifecycle) to create code that doesn't contain critical security vulnerabilities (i.e. both can't do it (with maybe some notable exceptions)).

What is needed is a new paradigm (well, not that new if you ask Gary McGraw) that creates a financial model that rewards the companies that are able to create secure applications that can be executed on secure environments (the idea is not to prevent bugs/vulnerabilities from existing, but to prevent the damage caused by their exploitation).

Ultimately all source code will have to be released and made public (not necessarily in an Open Source format, but at least available for peer review and external (i.e. independent) analysis), and again here Novell and the other Open Source development companies have an advantage.

The other major asset which the Open Source distributions have (and one which will be crucial in the future) is the centralized distribution of Software (i.e. packages). In the future we will need entities that certify the security of Software applications, which in an unmanaged-code world (for example: C++ & Full Trust .Net ) is almost impossible to do (i.e. say for sure that Application XYZ does not contain a keyboard hook and direct access to the Internet), but quite possible in a managed, type-safe and verifiable world.

Of course, more CLRs (with custom GCs, Security Managers, Class Loaders, verifiers, etc.) will need to be built, since the requirements of a powerful Windows application are very different from those of an ASP.NET form, which in turn are very different from those of a device driver.

Looking forward to your comments,

Best regards,

Dinis Cruz

Wednesday, 20 January 2010

OWASP for Charities: Haiti relief effort

There are days when I am really proud of being part of the OWASP community; today is one of those days :)

The email below was just sent to everyone who subscribes to an OWASP mailing list or has an @owasp.org address (about 10,000 people)



OWASP Members and Supporters,

OWASP was founded, and is supported as a non-profit organization, by a group of dedicated volunteers who believe that all applications should be secure and trusted.  As our organization matures we have taken those beliefs broader, and have started setting up ways for our members to donate to the global community.  Among these initiatives are:
  • OWASP has an active Kiva lending team who have donated $9,125.00 to date.  http://www.kiva.org/community/viewTeam?team_id=522
  • OWASP, in response to the need in Haiti, has set up a secure and trusted way for those within the OWASP community to donate funds to help the people of Haiti. This allows our OWASP community to help one another with a single global voice.  100% of the collected donations will be transferred directly to victims for disaster relief such as food and medical requirements.  Please visit www.owasp.org and click the link for G33k-4-HAITI.  In a time of crisis, OWASP can help those who are in great need. The OWASP community can help organize, support, and promote efforts outside of application security.
OWASP is well aware that there is a movement among phishers to exploit this tragedy by getting unsuspecting people to donate to a “cause” without a legitimate business back end, ultimately funneling all the money directly into their own pockets.  The OWASP community is uniquely qualified to help protect against this type of attack and to educate people about such attacks as well.

As the world becomes more dependent on technology and particularly web applications, there are many who need protection but simply have no options to protect themselves.  These include small companies, individuals, charities, and others.  The OWASP community can help by connecting qualified, trusted resources willing to volunteer their time to those organizations which qualify. OWASP is setting up an outreach program, which will run under the project name OWASP for Charities.

We hope you will support OWASP's efforts to make a difference in any of the above ways. We are also open to suggestions regarding where you feel the OWASP Community can be of service.

Regards,


Your OWASP Board


Kate Hartmann
OWASP Operations Director
9175 Guilford Road
Suite 300
Columbia, MD  21046

Wednesday, 13 January 2010

A couple more comments on ESAPI and ESTAPI

Following my Recommending ESAPI? post, there have been some great answers and comments (on both the SC-L and esapi-user lists). Here is a great recent blog post on ESAPI: What is the ESAPI?

To help clarify why I asked those questions (and my chain of thought), here are two extra entries with my personal opinion. The first one is a post that I started writing a couple of days ago, and the second one is a response I just posted on the SC-L and esapi-user mailing lists:




1st one
(note that some of the questions asked here are already answered in some of the comments to the original post)

My position is that ESAPI is a great example of what an enterprise security API should look like. But we have to be very careful when we say 'use ESAPI', because we risk that people will actually use ESAPI in their applications (i.e. download the jar, copy it into their application, and use it)

I think we need to be careful about making this recommendation for several reasons.  The first one is that when we say 'use ESAPI', we are basically positioning ESAPI in competition with all the other frameworks (including J2EE in this case, for the Java side of things).

The second one is that we have to provide much more visibility into the bits of ESAPI that are ready to be used in production.  My understanding is that there are a lot of modules in ESAPI and not all of them have the same level of quality or readiness.

Some of our target audience is going to be developers without a lot of experience in application security, but who want to implement their security controls/features right. So we have to be very, very clear whether we are providing them with the right advice on ESAPI, namely on which modules are actually enterprise-ready. (for example: Is it just the encoding module or the logging module?  Or is the authentication module also ready for enterprise or application use?)


For me, part of the solution is to explicitly break ESAPI into three parts.  


The first one is the interfaces.  Basically, the security controls that an ESAPI API or ESAPI compliant framework should have.  

The second one should be a reference implementation(s), which is what we have today (and is the one that should be commercially supported by third parties).

The third one is unit tests for the interfaces.  This is the bit that I'm actually quite interested in, and I call these the ESTAPI, which is the Enterprise Security Testing API.



I think that is a really valuable asset that we need to implement asap (starting by extracting what is already inside the ESAPI implementations). With the ESTAPI, we will have a particular set of tests for a particular 'security sensitive behaviour', for example: "...If you are doing encoding for HTML, this is what you should do.  If you are doing encoding for Javascript, this is what you should do.  If you are doing encoding for HTML attributes, this is what you do.  If you are doing encoding for exec inside a Javascript block...., if you are doing authentication...., if you are doing authorization, ... if you are implementing a password reset... ,..etc., etc.  ..."
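As a rough sketch of what one of these ESTAPI behaviour tests could look like (all interface, class and method names here are hypothetical, not the real ESAPI ones), the test is written against an interface, so the same test could later be pointed at any framework's encoder:

```java
// Hypothetical sketch of an ESTAPI-style behavioural test for HTML encoding.
// 'HtmlEncoder' and 'SimpleHtmlEncoder' are invented names for illustration.
public class EstapiEncoderTest {

    // The behaviour an ESAPI-compliant framework would have to implement.
    interface HtmlEncoder {
        String encodeForHtml(String input);
    }

    // A minimal reference implementation, for demonstration purposes only.
    static class SimpleHtmlEncoder implements HtmlEncoder {
        public String encodeForHtml(String input) {
            StringBuilder out = new StringBuilder();
            for (char c : input.toCharArray()) {
                switch (c) {
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '&':  out.append("&amp;");  break;
                    case '"':  out.append("&quot;"); break;
                    case '\'': out.append("&#x27;"); break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }
    }

    // The ESTAPI-style test: it only knows the interface, so it can be run
    // against any framework via a small connector.
    static boolean passesHtmlEncodingTests(HtmlEncoder encoder) {
        return encoder.encodeForHtml("<script>").equals("&lt;script&gt;")
            && encoder.encodeForHtml("a & b").equals("a &amp; b")
            && encoder.encodeForHtml("plain text").equals("plain text");
    }

    public static void main(String[] args) {
        HtmlEncoder encoder = new SimpleHtmlEncoder();
        System.out.println("HTML encoding tests passed: " + passesHtmlEncodingTests(encoder));
    }
}
```

The point of the sketch is the separation: the test knows nothing about the implementation, which is exactly what makes it reusable across frameworks.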

With the ESTAPI, I'll be able to run these tests against all supported frameworks, whereby for each framework all I need are little connectors between the ESAPI interface and the particular framework's functionality/behaviour. Then we will be able to test the frameworks for their security capabilities.  I think this will actually provide a much more pragmatic and objective analysis of each framework, allow the mapping of the supported security controls & behaviour, and give the frameworks' clients much more visibility into what's happening in those frameworks.


Again, I don't want to put ESAPI down.  I think ESAPI is a great project.  I think it's one of the most successful and powerful projects for OWASP and we just need to clarify it a little bit.  In fact, I think the more successful ESAPI is, the more this becomes a problem.  I don't think ESAPI  has blown up yet because ESAPI hasn't reached a wide level of adoption by software developers or commercial applications.  



Remember that one day we will have to deal with the problems of applications built on top of ESAPI that have security vulnerabilities or that are being successfully compromised by malicious hackers.




2nd one (posted on the mailing lists)

My view is that the key to make this work is to create the ESTAPI, which is the Enterprise Security Testing API



This way we would have (for every language):
  • ESAPI Interfaces - which describe the functionality that each security control should have
  • ESTAPI - Unit Tests that check the behaviour of the security controls
  • ESAPI Reference Implementation(s) - Which are (wherever possible) 'production ready' versions of those security controls (and  in most cases a one-to-one mapping to the ESAPI Interfaces)
  • Framework XYZ ESAPI 'connectors' - Which wrap (or expose) the security controls defined in the ESAPI Interfaces in Framework XYZ
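To illustrate the 'connector' idea from the list above, here is a minimal hypothetical sketch (EsapiEncoder and LegacyFramework are invented names, not real ESAPI or framework APIs): the connector is just a thin adapter that exposes a framework's own behaviour through the common interface.

```java
// Hypothetical sketch of a framework 'connector': a thin adapter that
// exposes a framework's own encoding API through an assumed ESAPI-style
// interface, so the same ESTAPI tests can run against any framework.
public class FrameworkConnectorSketch {

    // Assumed ESAPI-style interface (invented for illustration).
    interface EsapiEncoder {
        String encodeForHtml(String input);
    }

    // Stand-in for a third-party framework with its own encoding API.
    static class LegacyFramework {
        static String htmlEscape(String s) {
            // '&' must be replaced first to avoid double-encoding.
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
        }
    }

    // The connector: a one-line bridge between the two APIs.
    static class LegacyFrameworkConnector implements EsapiEncoder {
        public String encodeForHtml(String input) {
            return LegacyFramework.htmlEscape(input);
        }
    }

    public static void main(String[] args) {
        EsapiEncoder encoder = new LegacyFrameworkConnector();
        System.out.println(encoder.encodeForHtml("<b>"));
    }
}
```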
What I really like about this world is that we (Application Security Consultants) start to create standards for how Security Controls should behave, and (as important) are able to work with the Framework developers without them feeling that ESAPI is a 'competitor' to their Framework. After all, the way we will really change the market is when the Frameworks used by the majority of developers adopt ESAPI (or its principles)

Of course, the Framework developers are more than welcome to grab large parts (or even all) of the code provided by the ESAPI reference implementation(s). But the key is that they (the framework developers) must: a) take ownership of the code and b) respect the ESAPI Interfaces.

And hey, if the Framework developers decide NOT to implement a particular security control, that is fine too. 

BUT! 

I would at least expect them to provide detailed information on why they made that decision and why they chose NOT to implement or support it (which would allow us (the Security community) to respectfully agree or disagree with their choices (hey, for some Frameworks, being insecure is a feature :) )

Finally, in addition to all the advantages that we will have when frameworks adopt these security controls, there is one that for me is probably the MOST important one: an 'ESAPI compliant app' (which, btw, is a term whose exact meaning we still have to agree on) is an app that provides explicit information about where they (the developers) think their (the app's) security controls are located.

In other words, via the ESAPI Interfaces (and the ESTAPI tests) the developers are actually telling us (the security consultants):
  a) what they think their application's attack surface is and  
  b) what is the security behaviour that they have already tested for

Of course they can game the system, which is why we (Security Consultants) will still be needed (we will also need to make sure that they implemented the security controls properly). But compare that to today's world, where we are lucky to get an up-to-date application diagram and a reasonably accurate description of how the application was actually coded and behaves.

This would also (finally) give the application security tools (white, black, glass, gray, pink, blue) a fighting chance to automatically, or operator-driven, understand what is going on and report back:
  - what it knows (security vulnerabilities) and (as important) 
  - what it doesn't know / understand
(ok there is a lot more that these tools will provide us (for example ESTAPI tests) but that is a topic for another post)

So, for me, the key added value of the ESAPI Interfaces is that they will provide us (Security Consultants) a way to understand how the app works (from a security point of view) and to finally be able to give the clients what they want: Visibility, Assurance and the ability to make 'knowledgeable Risk-based decisions'.

Monday, 11 January 2010

On Comments on Static Tools thread and Frameworks

Here are some comments on the comments made on the .. thread:

@Andrew: you touched on a very important point, which is the importance of the 'operator' (i.e. the knowledgeable user). As per the points you make, I really think that we need to take the operator's impact into account.

Sunday, 10 January 2010

Update #4 on OunceLabs/IBM Relationship

Following the previous posts on this topic (see Update on O2 & Ounce & IBM, Update #2 on O2 & IBM - 02 Sep 09, Update #3 on O2 & IBM) here is an update on how the first chapter was concluded.

After some internal debate, IBM decided that the time was not right for them to provide commercial support for O2, so instead of waiting around in IBM land, I made the decision to not accept the contract that I was offered (see why I said NO to IBM for now), which had the practical consequence that my contract with IBM ended on December 31 2009.

The Need for Standards to evaluate Static Analysis tools

In Jan 2010, in the security static analysis space (also called SAST, for Static Application Security Testing (you can download Gartner's Magic Quadrant report from Fortify's website)), there are a number of security-focused commercial products (and services) for analyzing an application's source code (or binaries): Fortify SCA, IBM with Source Edition (was OunceLabs) and Developer Edition, Armorize CodeSecure, CodeScan, Veracode Security Review, Microsoft's CAT.NET, Coverity Static Analysis, Klocwork TruePath, Parasoft Application Security Solution and Art-of-Defence HyperSource (I didn't include any Open Source tool because I am not aware of any (actively used) that is able to perform security-focused taint-flow analysis)

Recommending ESAPI?


(I just posted this on the SC-L mailing list and ESAPI users list)

Following the recent thread on Java 6 security and ESAPI, I just would like to ask the following clarifications: 

1) For an existing web application currently using an MVC framework (like Spring or Struts), are we today (9th Jan 2010) officially recommending that this web application's development team adds OWASP's ESAPI.jar to the list of 'external' APIs (i.e. libs) they use, support and maintain?

2) When adopting the OWASP ESAPI's J2EE implementation, is ESAPI.jar ALL they need to add? or are there other dependencies (i.e. jars) that also need to be added, supported and maintained? (for example on the 'Dependencies' section of the ESAPI Java EE page (i.e. Tab) it seems to imply that there are other *.jars needed)

3) Where can I find detailed information about each of the 9 Security Controls that ESAPI.jar currently supports: 1) Authentication, 2) Access control, 3) Input validation, 4) Output encoding/escaping, 5) Cryptography, 6) Error handling and logging, 7) Communication security, 8) HTTP security and 9) Security configuration? (I took this list of controls from the Introduction to ESAPI pdf)

4) When adopting ESAPI.jar, are we recommending that the developers should adapt or retrofit their existing code in the areas affected by those 9 Security Controls? (i.e. code related to: Authentication, Access control, Input validation, Output encoding/escaping, Cryptography, Error handling and logging, Communication security, HTTP security and Security configuration)

5) Should we recommend the adoption of ALL 9 Security Controls? Or are there some controls that are not ready today (9 Jan 2010) for production environments and should not be recommended? (for example, is the 'Authentication' control as mature as the 'Error handling and logging' control?)

6) Are there commercial (i.e. paid) support services available for the companies who want to add ESAPI.jar to their application?

7) Which version of ESAPI.jar should we recommend? Version 1.4 (which looks like a stable release) or version 2.0 rc4 (which looks like it is a Release Candidate)?

8) Where can I find documentation on where and how ESAPI should be used? More importantly, where can I find information on how it CANNOT or SHOULD NOT be used (i.e. the cases where, even when ESAPI.jar is used, the application is still vulnerable)?

9) Is there a list of companies that have currently added ESAPI.jar to their applications and have deployed it? (i.e. real-world usage of ESAPI)

10) Has the recommended ESAPI.jar (1.4 or 2.0 rc4) been through a security review? And if so, where can I read its report?

11) When Jim says "... you can build a new secure app without an ESAPI. But libs like OWASP ESAPI will get you there faster and cheaper....", do we have peer-reviewed data that supports this claim?

12) Is there a roadmap or how-to for companies that wish to adopt ESAPI.jar in a) a new application or b) an existing real-world application?

13) What about the current implementations of ESAPI for other languages? Are we also recommending their use?

14) If a development team decides to use (for example) Spring and ESAPI together in their (new or existing) application, what are the recommended 'parts' from each of those APIs (Spring and ESAPI) that the developers should be using? (for example: a) use Encoding from ESAPI, b) use Authentication from Spring, c) use Authorization from ESAPI, d) use Error Handling from Spring, e) use Logging from ESAPI, etc...)

Thanks

Saturday, 9 January 2010

Every API is (at some level) vulnerable

When doing source code analysis, one of the things that we tend to spend a lot of time talking about is whether a particular API (i.e. function) is vulnerable or not (note that (whenever possible) I am a big believer in having working exploits for each unique "vulnerability pattern").


But, when you look at the code, the reality is that as soon as a function has a capability (i.e. it is able to do something), most likely that function is going to be vulnerable to a particular type of attack or exploit. 


In most cases these vulnerabilities will be considered 'exploitable' when the function/application allows a remote attacker to do things that he is not supposed to do: manipulating HTML data or SQL queries, changing the behavior of the application, accessing non-authorized areas.


I quite like the picture where we visualize the application as a series of functions that have vulnerabilities in them, and the exercise is to see if any of those functions connect to the outside world.


A good example is a data-layer function that receives an SQL statement as a string to execute (which, btw, most of the .NET APIs allow!). Note how that function is vulnerable by design to SQL injection! But the question is, can the attacker put a payload on it?
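As a small illustration of this 'vulnerable by design' data-layer pattern (hypothetical helper names, no real database involved), compare a function that accepts a fully-formed SQL string with one that keeps the query shape fixed and passes user input as a separate value (as a JDBC PreparedStatement parameter would be):

```java
// Sketch of the 'vulnerable by design' data-layer idea; the helper
// names are invented and no database is actually queried.
public class DataLayerSketch {

    // Vulnerable by design: the caller builds the SQL, so any caller that
    // concatenates user input into the string has created an injection point.
    static String buildQueryUnsafe(String userName) {
        return "SELECT * FROM users WHERE name = '" + userName + "'";
    }

    // Safer wrapper: the SQL shape is fixed and user input travels as a
    // separate value (with JDBC this would be a PreparedStatement parameter).
    static String[] buildQuerySafe(String userName) {
        return new String[] { "SELECT * FROM users WHERE name = ?", userName };
    }

    public static void main(String[] args) {
        String payload = "' OR '1'='1";
        // The payload changes the logic of the unsafe query...
        System.out.println(buildQueryUnsafe(payload));
        // ...but cannot change the shape of the parameterized one.
        System.out.println(buildQuerySafe(payload)[0]);
    }
}
```

The unsafe function is exactly the kind of 'layer of vulnerability' worth mapping upwards: it is only exploitable if some caller lets attacker-controlled data reach it.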


I think that one of the good exercises to carry out is to find out where the 'layers' of vulnerability are and start mapping them upwards (i.e. via the functions that consume them), until you identify (or not) the problems.


Even when you can't identify 'exploitable' problems, you will probably be able to identify cases where they were very close, or were one step away from creating a vulnerability.


Of course, depending on the language (and C++ can be more problematic than .Net), you might want to raise those 'not really exploitable today' issues with a 'to fix asap' rating.  (you should also map systemic problems versus one-off problems).


From a secure design point of view, ideally you want to see APIs built in a way that they don't expose vulnerabilities to the outside world.  These APIs will wrap their internal vulnerabilities in such a way that they are not actually vulnerable (i.e. even if the data consumed is maliciously controlled, it is not possible to exploit them)


For example, this is what (since .NET 1.4) Microsoft does with Code Access Security (CAS). They treat the CAS world (i.e. the partial trust world boundaries) as the attack surface.

Friday, 8 January 2010

Using Elance.com to post O2 related paid tasks

I've just created an account with www.elance.com (a freelance buyer/seller exchange) and posted a couple O2 related jobs:
If you have some cycles or know somebody who has, please point them to those pages

Dinis Cruz

Thursday, 7 January 2010

OWASP books on Amazon (can you help?)

I just noticed today that in addition to being on Lulu (English (& other language) books and Spanish books) OWASP books are now available on Amazon :) 
These are also there but they are quite a bit out of date:
Hmm, there are a number of books that really shouldn't be there; we actually need to get a better grip on these books.

There is a lot of potential here, and it would be great to expose more people to these books. I would also like to see more books created from OWASP materials and from presentations given at OWASP Conferences and Chapters

We need more help in managing these books and the publishing process; does anybody want to help? (here is the OWASP books page on the OWASP WIKI)

Dinis Cruz

OWASP December Membership Numbers

From Alison email to Owasp Board mailing list:

Total Number of Individual Memberships: 767
  • New Memberships in December: 26
  • Renewals in December: 0
  • Lost memberships in December (did not renew): 9
  • December Income from Individual  Memberships: $1300
  • Portion allocated to local chapters: $400
  • December Profit from Individual Memberships: $900

Total Number of Organization Memberships: 27

  • New Memberships in December: 0
  • Renewals in December: 1 (Nokia)
  • Lost memberships in December (did not renew): 1  ({name removed})
  • December Income from Corporate Memberships: $5,000
  • Portion allocated to local chapters: $0
  • December Profit from Corporate Memberships: $5,000

Total Profit for December: $5,900

Wednesday, 6 January 2010

[To Code]: Add Etherpad capability to allow realtime O2 XRules Support

Etherpad has an amazing real-time editing and history-viewing interface that would be really handy to add to O2 since it would allow for real time support to O2 XRules.

The idea would be for two (or more) XRules users to swap the EtherPad address and be able to compile and execute an XRule (or UnitTests) locally with code from a remote EtherPad pad

For example:

The only missing piece is to add to O2's XRules the capability to go to that page, grab the source code, compile it and execute it!  (Shouldn't be that hard since all the required building blocks are already in O2)


This also has quite a lot of security implications, since we don't want to execute malicious code :)

Ideally what we need is:

  • Trusted host for etherpads
  • Download the code via SSL (or other encrypted method (maybe using PKI or shared Keys))
  • Compile code
  • Run a scan on the code before execution
  • Execute code under a .NET Sandbox
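One small, sketchable piece of the trusted-host idea above is verifying that the downloaded source matches a digest published (out of band) by the trusted pad host before compiling it. A minimal sketch follows; the XRule source string and the helper names are made up for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of an integrity check on downloaded XRule source code.
public class TrustedCodeCheck {

    // Hex-encode a byte array.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // SHA-256 digest of a piece of source code.
    static String sha256(String text) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return toHex(md.digest(text.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Only accept the downloaded source if its digest matches the one
    // published by the trusted pad host.
    static boolean isTrusted(String downloadedSource, String publishedSha256) {
        return sha256(downloadedSource).equals(publishedSha256);
    }

    public static void main(String[] args) {
        String source = "Console.WriteLine(\"hello from an XRule\");"; // made-up XRule source
        String published = sha256(source); // what the trusted host would publish
        System.out.println("untampered source trusted: " + isTrusted(source, published));
        System.out.println("tampered source trusted:   " + isTrusted(source + " evil();", published));
    }
}
```

This only covers integrity, of course; the scan-before-execution and .NET Sandbox steps in the list would still be needed before actually running the code.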

References:

Monday, 4 January 2010

Why does Blogger/Google suck so badly at Managing HTML Fonts?

(rant warning)
If you see multiple fonts in http://diniscruz.blogspot.com/2010/01/idea-why-doesnt-paypal-or-others-also.html it is because Blogger (and Google) took a leaf from Microsoft's book and (if they are developing anything at all) are spending too much time on 'extra features' and not allocating enough resources to solving problems that happen with features that their users (i.e. me) use ALL the time: like making sure there is ONLY ONE FONT in the entire post!!!!!


Also, I wonder why they don't detect that one of their users (i.e. me) is not happy with their service, is losing patience and (if a working alternative is found) will jump ship in a blink!

Can't Google use its 'Search Automation Algorithms' to find these "...we're about to lose an active user..." scenarios, and then do something about it?

(I'm having these problems using Mac OS X with Safari, Chrome and Firefox)

Idea: Why doesn't PayPal (or others) also manage my PII like Email, Address, Phone?

Following a couple Tweets I posted earlier today:
  • Why doesn't #paypal also manage my personal information like my address (I prefer to pay using paypal so that I don't use my CC everywhere)
  • ...so why don't PP provide a service where ONLY PayPal knows my address (so that I don't have to give it to EVERYBODY I trade)
  • ...you could have some interesting variations of this service where in some cases the goods would have to be shipped via PayPal
  • ... In fact if #PayPal doesn't do it and if #FedEx or #UPS or#Amazon do it, I will start using them instead of #PayPay
I had this question sent via email:


"Typically people buying things online do release their postal address to the merchant/seller, since they need to ship goods.  Sellers/merchants don’t always need to have your address of course, but I don’t believe you can exclude it when dealing with a merchant, unless you just do a “send money” transaction to them, in which case they do not get your postal address."

to which I replied:


"...Sure, and isn't just "send money" what PayPal currently does?




It just hit me when I was buying a book from an independent publisher (which I will only use once) that I had no option but to give my full personal details (name, address, phone number, etc..). 


I was happy that I could use PayPal (since that way I don't have to give them my CC details) but was not happy that I could not use PayPal (or other) to protect my other PII (Personal Identifiable Information).


See the only reason I use PayPal or Amazon is because I have more trust in them (in fact, in this case I checked if Amazon had the book (which it didn't))


Although the PII in this case is not mission critical, it is a good case study for the types of 'Security related data management services' that we will need on the Internet in order to allow users to have trust in buying goods online (and even more as we move into the cloud). It will get much more complicated when we also need similar services/controls for Health or Financial records.


From my point of view I want to have one (or just a couple) web brokers, who are able to manage (probably even better than me) my sensitive data and make money by charging (me or my bank) a little bit for their service (just like Amazon does with their postage charges). Can't you see how a bank would prefer this? They (the bank) could say "OK... for web transactions we will not send you CC or its details, but will instead send them to PayPal/Amazon/FedEx/Google and you can access it (i.e. buy using it) via your Phone or the Web"


This is why I also tweeted that this service could be provided by FedEx or UPS, since they are able to provide this "buy book from publisher XYZ" service without the seller having any idea of who the buyer is (note that these companies have already 'sorted out' the delivery mechanism)


Does this make sense?..."

What I did in 2009

2009 was the year that I: 
I'm sure I did more stuff, but can't really remember it now :)

Focus on MOSS (SharePoint) Security

(this was posted today to the OWASP O2 Platform mailing list and the OWASP-DotNet Project mailing list)
------

Now that the IBM contract has ended, I'm starting this January focused on MOSS (SharePoint), which is part of a project that I have been working on for a while and for which I can finally start publishing my techniques and (some of) my findings.


I think that there are a couple of guys here (on the O2 or DotNet mailing lists) who are either currently involved in a SharePoint-related engagement or have done one in the past. For them (and others interested in this topic), please let's collaborate on this one and help to create a MOSS Security Center of Excellence here at OWASP :)

There was a MOSS thread a while back that proposed the creation of an OWASP WIKI page to store this research. The link was to
http://www.owasp.org/index.php/Research_for_Sharepoint but there was no content there (Mark, is there another page?) so I've started populating this Research_for_Sharepoint page with the following topics:



  • 1 Resources

    • 1.1 Microsoft resources
    • 1.2 Other Resources and Documentation
    • 1.3 Presentations
    • 1.4 Other interesting resources
    • 1.5 Other Blogs and Articles
    • 1.6 Security related technical articles
  • 2 Published Security issues

    • 2.1 SharePoint related vulnerabilities and its status
  • 3 MOSS Security related WebParts, Tools & services

    • 3.1 Open Source
    • 3.2 Commercially Supported
  • 4 Dangerous MOSS APIs
  • 5 WebParts Security
This is far from complete and I still have quite a lot of research notes I want to publish (please add the ones you know). Although all topics are now on this page, I expect (as the content grows) this to be split into Multiple MOSS related pages.

I also have a number of MOSS O2 related tools and scripts that I will be publishing very soon :)

O2 on Mono, MonoDevelop and OSx

Hi, I need a bit of help with this one.

After some minor code changes, I was able to load up the "O2 - All Active Projects" solution file inside OSx using MonoDevelop and Mono 1.2.6.

To replicate this, it should just be a case of doing an SVN checkout of http://o2platform.googlecode.com/svn/trunk/  (or http://o2platform.googlecode.com/svn/trunk/O2%20-%20All%20Active%20Projects/ directly) and compiling the "O2 - All Active Projects" solution file (you can also open up the individual "O2 Tool - xxx" solution files).

The command line modules work OK, but the problem seems to be in the WeifenLuo.WinFormsUI module (which is the one that recreates the VS-like window-docking environment), namely its PInvokes.

When running any one of the O2 Tool modules (either from MonoDevelop, or from the command line using "mono _FindingsViewer\ %28O2\ Tool%29.exe") I get the error: "Unhandled Exception: System.DllNotFoundException: user32.dll"

I googled this a bit and there seem to be some hacks around this (one of them seems to involve using WINE).

Is there a way to solve this, or do I need to see how hard it will be to remove the PInvokes from the WeifenLuo.WinFormsUI dll or find another GUI host environment for O2?