Thursday, 15 December 2016

The Authentication micro-service cache incident

A good example of why we need tests across the board (not just normal unit tests, but integration tests and tests spread as widely as possible) is the story of an authentication module that was refactored into a separate micro-service.

When the module was developed, it had a high degree of code coverage; in fact, it had 100% unit test coverage. The problems arose when it went live, and several issues occurred. One of the original issues occurred because the new system was designed to improve the way the passwords were stored in the database. This meant that once it was fully deployed, some of the existing dependent services stopped working.

Risk Dashboards and emails

It is critical that you create a suite of management dashboards that map the existing security metrics and the status of RISK tickets:

Jira Dashboard

Why GitHub and JIRA

My current experience is that only GitHub and JIRA have the workflows and the speed that allow these risk workflows to be used properly in the real world.

I know there are other tools available that try to map this and create some UIs for risk workflows, but I believe that you need something very close to the way developers work. GitHub and JIRA meet this essential requirement, as they are both connected to the source code.

Wednesday, 14 December 2016

Linking source code to Risks

If you add links to risks as source code comments, you deploy a powerful and very useful technique with many benefits.

When you add links to the root cause location, and all the places where the risk exists, you make the risk visible. This reinforces the concept of cost (i.e. pollution) when insecure, or poor quality, code is written. Linking the source code to risk becomes a positive model when fixes delete the comments. When the comments are removed, the AppSec team is alerted to the need for a security review. Finally, tools can be built that will scan for these comments and provide a 'risk pollution' indicator.
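
As a rough illustration of the 'risk pollution' indicator idea, here is a small sketch (in Python) that scans a codebase for risk-link comments and counts them per file. The RISK-nnn comment convention and the file extensions are assumptions made for the example, not something prescribed by the workflow.

    import os
    import re
    from collections import Counter

    # Assumed convention: a comment such as "// RISK: https://jira.example.com/browse/RISK-123"
    RISK_LINK = re.compile(r'RISK-\d+')

    def risk_pollution(root='.'):
        """Count risk-link comments per file: a simple 'risk pollution' indicator."""
        counts = Counter()
        for folder, _, files in os.walk(root):
            for name in files:
                if not name.endswith(('.cs', '.java', '.js', '.py')):   # assumed extensions
                    continue
                path = os.path.join(folder, name)
                with open(path, errors='ignore') as source:
                    counts[path] = len(RISK_LINK.findall(source.read()))
        return {path: total for path, total in counts.items() if total}

    if __name__ == '__main__':
        for path, total in sorted(risk_pollution().items(), key=lambda item: -item[1]):
            print(f'{total:3}  {path}')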


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Employ Graduates to Manage JIRA

One of the challenges of the JIRA RISK workflow is managing the open issues. This can be a considerable amount of work, especially when there are 200 or more issues to deal with.

In large organizations, the number of risks opened and managed should be above 500, which is not a large quantity. In fact, visibility into existing risks starts to increase, and improve, when there are more than 500 open issues.

The solution to the challenge of managing issues isn't to have fewer issues.

Can't do Security Analysis when doing Code Review

One lesson I have learned is that the mindset and the focus you have when you do security reviews are very different from those you have when you work on normal feature development and code analysis.

This is very important because as you accelerate in the DevOps world, it means that you start to ship code much faster, which in turn means that code hits production much faster. Logically, this means that vulnerabilities also hit production much faster.

Threat Model Confirms Pentest

A key objective of a pentest should be to validate the threat model. Pentests should confirm whether the expectations and the logic defined in the threat model are true. Any variation identified is itself an important finding, because it means there is a gap in the company's understanding of how the application behaves.

There are three important steps to follow:

  1. Take the threat models per feature, per layer, and confirm that there are no blind spots or variations from the expectations
  2. Check the code paths to improve the understanding of what is actually happening in the threat model
  3. Confirm that there are no extra behaviours



(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)



Tuesday, 13 December 2016

Threat Model per Feature

Creating and following a threat model for a single feature is a great way to understand the threat modeling journey.

First, take a very specific path: a very specific new feature that you are adding, or a property such as a new field or a new piece of functionality.

Using Git as a Backup Strategy

When you code, you inevitably go on different tangents. Git allows you to keep track of all those tangents, and it allows you to record and save your progress.

In the past, we used to code for long periods of time and commit everything at the end. The problem with this approach is that sometimes you follow a path to which you might want to return, or you might follow a path that turns out to be inefficient. If you commit both early and often, you can keep track of all such changes. This is a more efficient way of programming.

Feedback Loops

The key to DevOps is feedback loops. The most effective and powerful DevOps environments are environments where feedback loops, monitoring, and visualizations are not second-class citizens. The faster you release, the more you need to understand what is happening.

DevOps is not a silver bullet, and in fact anyone saying so is not to be trusted. DevOps is a profound transformation of how you build and develop software.

DevOps is all about small interactions, small development cycles, and making sure that you never make big changes at any given moment. The feedback loop is crucial to this because it enhances your understanding and allows you to react to situations very quickly.

Good Managers Are Not The Solution

When we talk about risk, workflows, business owners making decisions about development, and QA teams that don't write tests, we often hear the comment, "If we had good managers, we wouldn't have this problem".

That statement implies that if you had good managers, you wouldn't have the problem, because good managers would solve the problem. That is the wrong way to read it. Rather, if you had good managers, you wouldn't have the problem, because good managers would ask the right questions before the problem even developed.

Monday, 12 December 2016

Horizontal DevOps

The best model I have seen for integrating DevOps in a company is one where teams are created that span multiple groups. Instead of having a top-down approach to the deployment of operations, where you create the central teams, the standards, the builds, etc., and then push it all down, the central DevOps team hires or trains DevOps engineers and then allocates them to each team.

The logic is that each team spends a certain amount of time with a DevOps engineer, who trains the team in DevOps activities and best practices, and thereby embeds those best practices in the development life cycle.

Is the Decision Hyperlinked?

I regularly hear the following statements: "The manager knows about it", "I told you about this in the meeting", "Everyone is aware of this", and so on. However, if a decision is not in a hyperlinkable location, then the decision doesn't exist. It is vital that you capture decisions, because without a very clear track of them, you cannot learn from experience.

Capturing decisions is most important for the longer term. When you deal with the second and third occurrence of a situation, you start building the credibility to say, "We did this in the past, it didn't work then, and here are the consequences. Here is the real cost of that particular decision, so let's not repeat this mistake".

Involve Security Champions in Decision-making

Once a program starts being put in place, security champions will often give feedback that they are not involved in the workflows and decisions. The job of the security champion is to ask, "What is this? Do I trust this? What happens with this?", but they often don't get the opportunity to ask these questions, because decisions are made without their input.

To illustrate this problem, a situation occurred recently where the security champion started to create threat models across a product, and thereby managed to retroactively involve himself in some of the decision-making.

Risk Workflow for Software Vendors

A software vendor is someone who delivers a software application that is executed by a client. The same concept also applies to web applications, but let's start by looking at a traditional software package.

The risk workflow in this case is very important, and there are multiple angles to consider. Let's start with a simple one.

The first items to consider are the issues that evolve during the development of the software. Already, two types of risks exist. There are the risks that exist in the application, which should be known and captured on the risk register. The business owner must accept these risks, because ultimately he/she must decide how to prioritize them, and whether to fix them or not, depending on the priorities of the business.

Sunday, 11 December 2016

The Pollution Analogy


When talking about risks, I prefer to use a pollution analogy rather than technical debt. The idea is that we measure the unintended negative consequences of creating something, which in essence is pollution.

In the past, pollution was seen as an acceptable side effect of the industrial revolution. For a long time, pollution wasn't seen as a problem, in the same way that we don't see security vulnerabilities as a problem today. We still don't understand that gaping holes in our infrastructure, or in our code, are a massive problem for current and future generations.

We are still in the infancy of software security; we are where pollution was in the 1950s. David Rice gave a great presentation[^david-rice-pollution] where he talks about the history of pollution and how it maps perfectly onto InfoSec and AppSec.

Published "SecDevOps Risk Workflow" book (v0.66)

Here is the text I just sent to the current 215 readers of my SecDevOps Risk Workflow book


Hi, here is v0.66

The reason you have not seen an update for a month is that I focused my writing time on the 'Hacking Portugal' book, which you can get from Amazon (https://www.amazon.co.uk/Hacking-Portugal-Making-Software-Development/dp/1540743632) or Leanpub (https://leanpub.com/hacking-portugal)

That book is an expanded version of the keynote presentation I delivered at BSidesLisbon (see http://blog.diniscruz.com/2016/11/presentation-hacking-portugal-and.html) and it is my first book published on Amazon :)

Saturday, 3 December 2016

Please help to set the date for the next OWASP DevSecCon Summit. Great description of OWASP Summits

Hi, we are in the final stages of choosing the date for the next OWASP Summit and it would be great if you chipped in with your preference.

Please use the http://doodle.com/poll/e8d4p955rc8guuru doodle and join the other 44 participants.

The OWASP Summit is starting to shape up quite nicely, with a number of good workshop ideas already in the works. Please check them out at https://github.com/OWASP/owasp-devseccon-summit/tree/master/Workshops and help to make them better:
  • what topic is missing?
  • who should be at those workshops?
  • what should the participants focus on?
  • what should the objectives/outcomes be?
If you have not been to an OWASP Summit before (i.e. the 2008 and 2011 editions), please see below a great description of what they are (from an email sent by Abraham Kang on 6 Apr 2012).

Thanks for your help

Dinis, Seba & Francois

----------------------------------------

Although, I agree with Jim in spirit.  

I have to admit that I was able to get things accomplished at the 2011 Summit that would have taken longer had I not attended the Summit.

I was kind of Stuck on the DOM based XSS cheat sheet because there were just so many existing ways and new ways of exploiting DOM based XSS.  I was lost in trying to understand the exploiting instead of focusing on the Mitigating. 

The Summit gave me an opportunity to work with some of top guys  ( Jim Manico, Stefano Di Paola, Robert Hansen, Gareth Hazes,  Chris Schmidt,  Mario Heiderich, Eduardo Nava, Achim Hoffman, John Stevens, Arian Evans, Mike Samuel, Jeremy Long, Dinis Cruz, and others please forgive me if I forgot to mention you) in Web security to get their ideas and refine mine.  
I also was able to bring up issues that were affecting adoption by large enterprises of OWASP materials with Jeff Williams and others.

Finally, I was also able to meet the people interested in OWASP Web Development Guide (which I have been trying to reboot but having started a new job have failed to make much progress on) to discuss issues related to the guide and try to address them.

All of this would have been impossible to do without the summit.

I was also hoping to suggest that this year we try to bring other security members of the community that haven't traditionally participated (iSec Partners, Gotham Digital Science, etc.) in OWASP to the summit as I have great respect for those guys and think they could contribute greatly to the success of OWASP.  

The conference is viewed as being private but I thought it was open to anyone interested in contributing to OWASP.  I think people would be willing to pay to attend a conference where they could speak to other leaders in informal meetings on topics of interest and provide the additional benefit of OWASP deliverables.

We are a very disperse group, it helps to get people together to work things out, discuss and see the other people as human beings. I have to admit that the conference was also a lot of fun.  I got to laugh with people I would have never had the chance to before this.  Jokes don't seem to go over as well when they are made over email.  I got to hear stories of (Larry's or Chris's -- the last names have been omitted to protect the Guilty) midget experiences/encounters.  I got to know of other people skeleton's in their closets.  

This allowed all of us to bond in a way that couldn't happen without a conference like this.

Another benefit of these types of interactions is that everyone that attended last summit was involved with an OWASP project (which may be a good requirement).  I met Andras (my German brother) of WS-Attacks.org and although I haven't done a good job of it yet, I was hoping to reboot the OWASP Web Development Guide (I will send another email on that thread to explain my struggles) and see if I could use the content from WS-Attacks.org in the new guide (seeing as I did the translation revision for Andras) for the Web Services chapter.  If I didn't attend the Summit I wouldn't have met him and made this connection.

Yes there were a couple of things that could have been handled better related to the usurping of funds from individual Chapter's accounts and we probably could have spent less money on the incidentals but there is great value in the Summit.

OWASP Rocks!

Warmest Regards,
Abe

Sorry for being so long winded.

Thursday, 1 December 2016

Please review the 'Hacking Portugal' book available on Amazon (paperback and Kindle)

My 'Hacking Portugal' book is now available on Amazon and I would really appreciate your feedback and ideally a book review :)

Here is the Amazon page: https://www.amazon.co.uk/Hacking-Portugal-Making-Software-Development/dp/1540743632

You can download the PDF for free at LeanPub or from GitHub


This is my first book published at Amazon, and I have to admit that I'm quite proud of it :)

This book is based on the "Hacking Portugal and making it a global player in Software development" presentation I delivered at the BSidesLisbon and C-Days conferences (November 2016). All content is released under a Creative Commons licence at the Book_Hacking_Portugal GitHub repo

Tuesday, 29 November 2016

Published 'Hacking Portugal' Book

I just published the 'Hacking Portugal' book which is based on the "Hacking Portugal and making it a global player in Software development"  presentation I delivered at BSidesLisbon in November 2016.

You can get it from Amazon


or at  https://leanpub.com/hacking-portugal/


Friday, 18 November 2016

Presentation: Veracode Automation CLI (using Jenkins for SDL integration)

Here is a presentation about a secure CI workflow that I'm working on.

The key parts are the Veracode CLI I developed (see veracode-api) and the couple of Jenkins projects which use the Veracode engine in a 'concurrent scanner' model.

Let me know what you think of it:

Friday, 11 November 2016

Presentation "Hacking Portugal and making it a global player in Software development"

UPDATE: See Hacking Portugal book for an expanded and updated version of these ideas (available from Amazon)


Here is the presentation I delivered today at BSidesLisbon

There is an extended version of these ideas on this GitHub repo which you can read online at: https://diniscruz.github.io/keynote-bsideslisbon/

Description: As technology and software becomes more and more important to Portuguese society it is time to take it seriously and really become a player in that world. Application Security can act as an enabler, due to its focus on how code/apps actually work, and its enormous drive on secure-coding, testing, dev-ops and quality. The same way that Portuguese navigators once looked at the unknown sea and conquered it, our new digital navigators must do the same with code. This presentation will provide a number of paths for making Portugal a place where programming, TDD, Open Source, learning how to code, hacking (aka bug bounty style) and DevOps are first class citizens.

Monday, 7 November 2016

Relationship with existing standards

It is important to have a good understanding of how a company's Risk profile maps to existing security standards like PCI DSS, HIPAA, and others.

Most companies will fail these standards when their existing 'real' RISKs are correctly identified and mapped. This explains the difference between being 'compliant' and being 'secure'.

Increasingly, external regulatory bodies and laws require some level of proof that companies are implementing security controls.

I don't know the security status of a website

Lack of data should affect decision-making about application security.

Recently, I looked at a very interesting company that provides a VISA-compatible debit card for kids, which allows kids to get a card whose budget can be controlled online by their parents. There is even a way to invest in the company online via a crowdfunding scheme.

I looked at this company as a knowledgeable person, able to process security information and highly technical information about the application security of any web service. But I was not able to make any informed security decision about whether this service is safe for my kids. I couldn't understand the company's level of security because they don't have to publish it and, therefore, I don’t have access to that data.

Sunday, 6 November 2016

Published "SecDevOps Risk Workflow" book (v0.65)

I just published version v0.65 of the SecDevOps Risk Workflow book.

You can get the book (for free) at https://leanpub.com/secdevops (when you become a reader you will get email alerts with every release)

The diff for this version (with v0.63) shows 115 commits, 59 changed files, 545 additions and 355 deletions.

Creating better briefs

Developers should use the JIRA workflow to get better briefs and project plans from management. Threat Models are also a key part of this strategy.

Developers seldom find the time to fulfil the non-functional requirements of their briefs. The JIRA workflow gives developers this time.

The JIRA workflow can help developers to solve many problems they have in their own development workflow (and SDL).


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Cloud Security

One way in which cloud security differs from previous generations of security efforts, such as software security and website security, is that in the past, both software and website security were almost business disablers. The more features and the more security people added, the less attractive the product became. There are very few applications and websites where the client needs more security in order to do more business, which is the scenario that delivers the best return on investment.

What’s interesting about cloud security is that it might be one of the occasions where security is a business requirement, because a lack of security would slow down the adoption rate and prevent people from moving into the cloud. Accordingly, people care about cloud security, and they invest in it.

Feedback loops are key

A common error occurs when the root cause of newly discovered issues or exploits receives insufficient energy and attention from the right people.

Initially, operational monitoring or incident response teams identify new incidents. The incidents are sent to the security department, and after some analysis the development teams receive them as tickets. The development teams receive no information about the original incident, and are therefore unable to frame the request in the right perspective. This can lead to suboptimal fixes with undesired side effects.

Saturday, 5 November 2016

Understand Every Project's Risks

It is essential that every developer and manager know what risk game they are playing. To fully know the risks, you must learn the answers to the following questions:
  • what is the worst-case scenario for the application?
  • what are you defending, and from whom?
  • what is your incident response plan?
Always take advantage of cases when you are not under attack, and you have some time to address these issues.


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Using logs to detect risks exploitation

Are your logs and dashboards good enough to tell you what is going on? You should know when new and existing vulnerabilities are discovered or exploited. However, this is a difficult problem that requires serious resources and technology.

It is crucial that you can at least detect known risks without difficulty. Being able to detect known risks is one reason to create a suite of tests that can run against live servers. Not only will those tests confirm the status of those issues across the multiple environments, they will provide the NOC (Network Operations Centre) with case studies of what they should be detecting.
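
To make this concrete, here is a minimal sketch of what one of those live-server tests could look like, written with pytest conventions and the Python 'requests' library. The target URL, the RISK ticket number and the expected behaviour are purely illustrative assumptions.

    import requests

    TARGET = 'https://app.example.com'   # illustrative environment URL, not from the original text

    def test_known_risk_123_stack_trace_not_leaked():
        """Hypothetical RISK-123: a verbose error page used to leak stack traces."""
        # The same test runs against DEV, QA and PROD, and doubles as a NOC detection case study.
        response = requests.get(TARGET + '/api/orders/not-a-number', timeout=10)
        assert response.status_code in (400, 404)    # the error is handled gracefully
        assert 'Traceback' not in response.text      # the known leak stays fixed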

Capture knowledge when developers look at code

It is vital that when a developer is looking at code, he can create tickets for 'things noticed' without difficulty. For example, 'things noticed' include methods that need refactoring, complex logic, weird code, hard-to-visualize architecture, etc. If this knowledge is not captured, it will be lost.

The developer who notices an issue, and opens a ticket for the issue, will be unable to do anything about it at that moment in time, since he will already be focused on resolving another bug.

Friday, 4 November 2016

Describe Risks as Features rather than as Wishes

When opening a risk JIRA ticket, it is essential to describe the exact behavior of that issue as a feature, rather than describing what you would like to see happening (i.e. your wish list).

For example:
  • instead of saying 'application should encode XYZ value', you should say 'XYZ value is not encoded'
  • instead of saying 'application shouldn't be vulnerable to XSS or SQL injection', you should say 'application is vulnerable to SQL injection'. In this case, SQL Injection is a feature of the application, and while the application allows SQL Injection, the application is working as designed (whether that is intended or not is a different story).

When we describe vulnerabilities, we describe features, because a vulnerability is a feature of the application.

The smaller the ticket scope the better

For bugs and tasks, the smaller the ticket the better.

Having many small bugs and issues can be an advantage for the following reasons:

  • easier to code
  • easier to delegate (between developers)
  • easier to outsource
  • easier to test
  • easier to roll back
  • easier to merge into upstream or legacy branches
  • easier to deploy

It is better to put them in a special JIRA project(s) which can be focused on quality or non-functional requirements.

Thursday, 3 November 2016

Collaboration Technologies

The following technologies are crucial for Security Champions and JIRA workflows to work efficiently:

Conference for Security Champions

Every 6 to 12 months, it is a good idea to hold a conference exclusively dedicated to security champions, particularly for companies that have multiple locations, where its security champions don't meet regularly in person.

At the conference, external speakers should present on specific topics.

If there are already several external AppSec consulting companies under contract to the hosting company, the consultants involved in existing projects are perfect candidates to present to the conference. They can use their own examples and stories, and it is easier to present internal materials if all participants are signed-up to the same NDA (Non-Disclosure Agreement).

Wednesday, 2 November 2016

Create a Technology Advisory Board

One of the biggest challenges in Agile and DevOps environments is the adoption rate of new technologies.

To be as agile as possible, there is a tendency to adopt new technology whenever it appears to have an advantage. Common examples are cloud technology, analytic tools, continuous integration tools, container technology, web platforms and frameworks, and client-side frameworks.

Inaction is a risk

Lacking the time to perform 'root cause analysis', or not understanding what caused a problem, are valid risks in themselves.

It is key that these risks are accepted.

This is what makes them 'real', and what will motivate the business owner to allocate resources in the future, especially when a similar problem occurs.



(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Tuesday, 1 November 2016

Risk accepting threat model

If you have trouble getting developer teams to create threat models, or to spend time on those threat models, then the solution is to make them accept the risk incurred from not having a threat model for the application.

The idea is not to be confrontational. Instead, stating that a feature has no threat model is a very pragmatic, focused, and objective statement.

The idea is that the developer team must accept that they don't have a threat model. The logic is to create a ticket that says there is no threat model, and the ticket will be closed when the threat model is created. Alternatively, if the developers and their management team don't want to spend the time creating a threat model, they must accept the risk of not having one.

This can be difficult to accept, but it's an important part of the exercise.



(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)

How to review Applications as a Security Champion

When you review applications as a security champion, you need to start by looking at the application from the point of view of an attacker. In the beginning, this is the best way to learn.

You should start thinking about data inputs, about everything that goes into the database, the application, all the entry points of the application. In short, think about everything an attacker could control, which could be anything from headers, to cookies, to sockets, to anything that enters the application.

Authorization is also a great way to look at the application. Just looking at how you handle data, and how you authorise things, is a great way to understand how the application works.


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Monday, 31 October 2016

If you don't have a Security Champion, get a mug

If your developer team doesn't have an assigned security team champion, get one of these mugs.

That 'Security Expert' mug represents the fact that, without a security champion, when a developer has an application security question, he might as well ask the dude on the mug for help.

I also like the fact that the mug reinforces the idea that for most developer teams, just having somebody assigned to application security is already a massive step forward!!

Basically, we have such a skill shortage in our industry for application security developers that 'if you have a heartbeat, you qualify'.

What it takes to be a Security Champion

To become a security champion, it is essential that you want to be one.

You need a mandate from the business that will give you at least half a day, if not one full day per week, to learn the role. The business should also provide the means to educate and train you and others who wish to become security champions. Increasing and spreading knowledge will increase awareness and control.

You need to be a programmer, and understand code, because your job is to start looking at your application and understand its security properties. You should also know 'the tools of the trade', and how to implement them, in the most efficient way. Lastly, you must be able to identify useful metrics and instruct on how to obtain them.


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Sunday, 30 October 2016

If you have a heartbeat, you qualify!

It is important to understand that AppSec skills are not a key requirement to become a security champion. The essential quality is to want to become one.

I can make a good developer, who is interested and dedicated, into a good AppSec specialist in 6 months. If the developer is an expert in AppSec, then he should join the central AppSec team.


(from SecDevOps Risk Workflow book, please provide feedback as a GitHub issue)


Published "SecDevOps Risk Workflow" book (v0.63)

I just published version v0.63 of the SecDevOps Risk Workflow book.

You can get the book (for free) at https://leanpub.com/secdevops (when you become a reader you will get email alerts with every release)

The diff for this version (with v0.60) shows 113 commits, 63 changed files, 667 additions and 185 deletions.

Using Artificial Intelligence for proactive defense

We need AI to understand code and applications. Our code complexity is getting to a level where we need to start using artificial intelligence capabilities to understand it, and to get a grasp of what is going on, so we can create secure applications that have no unintended side effects.

As AI becomes much more commonplace, we should start to use it more for source code analysis and application analysis. Kevin Kelly has some very interesting analysis on the use of AI, where he discusses the idea that one of the next major revolutions will be when we start adding AI to everything, because the cost of AI will become so low that we will be able to add it to many devices.

In DevOps Everything is Code

A common gap in DevOps workflows is (ironically) Application Security activities on the code the DevOps team is writing (Secure coding, Static/Dynamic analysis, Threat Models, Security Reviews, Secure Coding Guidelines, Security Champions, Risk Workflows, etc...)

One cause for this gap is the fact that a large number of DevOps teams come from network and infrastructure backgrounds, or network security backgrounds (i.e. traditional InfoSec), rather than from development (i.e. coding).

Saturday, 29 October 2016

Do security reviews every sprint

If you have an agile development environment, you need to implement security procedures and security reviews at the end of every sprint. In the period between the sprint finishing and going live, you need to do a push to get a sense of whether the original threats and issues that were highlighted in the threat model were addressed, or still exist, in a verifiable way.

This task shouldn't be done by the central AppSec team.

Why SecDevOps

I like SecDevOps because it reinforces the idea that it is an extension of DevOps. SecDevOps points to the objective that eventually we want the Sec part to disappear and leave us with DevOps.

Ultimately, we want an environment where 99% of the time, DevOps don't care about security. They program in an environment where they either cannot create security vulnerabilities, or it is hard to create them, or it is easy to detect security issues when they occur.

This doesn't mean you don't need security teams, or AppSec experts. It doesn't mean you don't need a huge amount of work behind the scenes, and a huge amount of technology to create those environments.

Presentation - "SecDevOps Risk Workflow - v0.6", InfoSecWeek, Oct 2016

Slides from presentation delivered at InfoSecWeek in London (Oct 2016) about making developers more productive, embedding security practices into the SDL and ensuring that security risks are accepted and understood.

The focus is on the Dev part of SecDevOps, and on the challenges of creating Security Champions for all DevOps stages.

This presentation is based on the ideas captured on the SecDevOps Risk Workflow book (that I'm currently writing).

Email to owasp-leaders about SecDevOps Risk Workflow Book

Here is the email I just sent to the OWASP-leaders list

SecDevOps Risk Workflow Book (please help with your feedback)

Hi fellow OWASP leaders and friends, over the past 4 years I made the move from 'breaking apps' into becoming a real Developer, an AppSec Trainer and creating multiple AppSec teams (protecting large companies from real attacks and helping developers to write secure code)

To try to capture my experiences, to help a wider audience, and to get some feedback, I've been creating a book on Leanpub called SecDevOps Risk Workflow, and I would really appreciate it if you could check it out.

You can get it for free at https://leanpub.com/secdevops 

Friday, 28 October 2016

Annual Reports should contain a section on InfoSec

Annual reports should include sections on InfoSec and AppSec, which should list their respective activities, and provide very detailed information on what is going on.

Most companies have Intel dashboards of vulnerabilities, which measure and map risk within the company. Companies should publish that data, because only when it is visible can you make the market work and reward companies. Obliging companies to publish security data will make them understand the need to invest, and the consequences of the pollution that happens when you have rushed projects with crazy deadlines and inadequate resources, but somehow manage to deliver.

5000% code coverage

A big blind spot in development is the idea that 100% code coverage is 'too much'.

100% or 99% code coverage isn't your summit (i.e. destination), 100% is base camp, the beginning of a journey that will allow you to do all sorts of other tests and analysis.

The logic is that you use code coverage as an analysis tool, and as a way to understand what a particular application, method or code path is doing.

Code coverage allows you to answer code related questions in much greater detail.
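
As a sketch of 'coverage as an analysis tool', the snippet below (using the coverage.py API) runs one very specific scenario and then reports which lines were, and were not, executed. The classify() function stands in for real application code; in practice you would point the include filter at your own packages and exercise a real feature.

    import coverage

    cov = coverage.Coverage()    # in practice: coverage.Coverage(include=['yourapp/*'])
    cov.start()

    def classify(order_total):
        """Illustrative code under analysis (stands in for real application code)."""
        if order_total > 100:
            return 'bulk'
        return 'standard'

    classify(5)                  # exercise one very specific code path

    cov.stop()
    cov.report(show_missing=True)   # the 'missing' lines show what this path did NOT execute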

Thursday, 27 October 2016

Run Apps Offline

The ability to run applications offline, i.e. without live dependencies on QA servers, or even live servers, is critical in the development process. That capability allows developers to code at enormous speed, because usually the big delays are the expensive calls to those services, and removing them allows all sorts of versioning, and all sorts of development techniques, to occur. The ability to run your apps offline also signifies that the application development environment has matured to a level where you now have, or have created, mocked versions of your dependencies.

Ideally, the faster you can run the dependencies, even running them as real code, the better. The important thing is to be sure you are running them locally, without a network connection, and without an umbilical cord to another system.
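
Here is a minimal sketch, using Python's unittest.mock, of what running a dependency offline can look like: the live HTTP call is replaced with a canned response, so the test needs no network at all. The lookup_user() function and its URL are illustrative stand-ins for real application code.

    from unittest import TestCase, mock

    import requests

    def lookup_user(user_id):
        """Stands in for real application code that normally calls a live dependency."""
        return requests.get(f'https://accounts.example.com/users/{user_id}', timeout=10).json()

    class OfflineTest(TestCase):
        def test_lookup_user_runs_without_network(self):
            # Replace the live HTTP call with a canned response recorded earlier,
            # so the test runs locally with no umbilical cord to another system.
            canned = mock.Mock()
            canned.json.return_value = {'id': 42, 'name': 'Ada'}
            with mock.patch('requests.get', return_value=canned):
                user = lookup_user(42)
            self.assertEqual(user['name'], 'Ada')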

Abusing the concept of RISK

As you read [the SecDevOps] book you will notice liberal references to the concept of RISK, especially when I discuss anything that has security or quality implications.

The reason is I find that RISK is a sufficiently broad concept that can encompass issues of security or quality in a way that makes sense.

I know that there are many, more formal definitions of RISK and all its sub-categories that could be used, but it is most important that in the real world we keep things simple, and avoid a proliferation of unnecessary terms.

Fundamentally, my definition of RISK is based on the concept of 'behaviors' and 'side-effects of code' (whether intentional or not). The key is to map reality and what is possible.

Wednesday, 26 October 2016

Email is not an Official Communication Medium

Emails are conversations, they are not official communication mediums. In companies, there is a huge amount of information and decisions that is only communicated using emails, namely:
  • risks
  • to-dos
  • non-functional requirements
  • re-factoring needs
  • post-mortem analysis
This knowledge tends to only exist on an email thread or in the middle of a document. That is not good enough. Email is mostly noise, and once something goes into an email, it is often very difficult to find it again.

Creating Small Tests

When opening issues focused on quality or security best practices (for example, a security assessment or a code review), it's better to keep them as small as possible. Ideally, these issues are placed on a separate bug-tracking project (like the Security RISK one), since they can cause problems for project managers who like to keep their bug count small.

Since this type of information only exists while AppSec developers are looking at code, if the information isn't captured, it will be lost, either forever, or until that bug or issue becomes active. It is very important that you have a place to put all those small issues, examples, and changes.

Tuesday, 25 October 2016

When Failed Tests are Good

When you make a code change, it is fundamental that every change you make breaks a test, or breaks something. You are testing for that behaviour; you are testing for the particular action that you are changing.

This means you should be happy when you make a change and the test fails, because you can draw confidence from knowing the side effects of the change. If you make a change in one place and a couple of tests break, and you make another change in a different place and fifty tests break, you get a far better sense of the impact of the changes that you made.

Security makes you a Better Developer

When you look at the development world from a security angle, you learn very quickly that you need to look deeper than a developer normally does. You need to understand how things occur, how the black magic works, and how things happen 'under the hood'. This makes you a better developer.

Studying in detail allows you to learn in an accelerated way. In a way, your brain does not learn well when it observes behaviour, but not cause. If you are only dealing with behaviour, you don't learn why something is happening, or the root causes of certain choices that were made in the app or the framework.

Monday, 24 October 2016

Chained threat models

When you create threat models per feature or per component, a key element is to start to chain them, i.e. create the connections between them. If you chain them in a sequence, you will get a much better understanding of reality. You will be able to identify uber-vulnerabilities, or uber-threats, that are created by paths that exist from threat model A, to threat model B, to threat model C.

For example, I have seen threat models where one will say, "Oh, we get data from that over there. We trust their system, and they are supposed to have DOS protections, and they rate limit their requests".

Code Confidence Index

Most teams don't have confidence in their own code, in the code that they use, in the third parties, or the soup of dependencies that they have on the application. This is a problem, because the less confidence you have in your code, the less likely you are to want to make changes to that code. The more you hesitate to touch it, the slower your changes, your re-factoring, and your securing of the code will be.

To address this issue, we need to find ways to measure the confidence of code, in a kind of Code Confidence Index (CCI).

Every Bug is an Opportunity

The power of having a very high degree of code coverage (97%+) is that you have a system where making changes is easy.

The tests are easy to fix, and you don't have an asymmetric code fixing problem, where a small change of code gives you a nightmare of test changes, or vice versa.

Instead, you get a very interesting flow where every bug, every security issue, or every code change is an opportunity to check the validity of your tests. Every time you make a code change, you want the tests to break. In fact you should worry if the tests don't break when you make code changes.

Sunday, 23 October 2016

Developers should be able to fire their managers

Many problems developer teams deal with arise from the inverted power structure of their working environment. The idea persists that the person managing the developers is the one who is ultimately in charge, responsible, and accountable.

That idea is wrong, because sometimes the person best-equipped to make the key technological decisions, and the difficult decisions, is the developer, who works hands-on, writing and reading the code to make sure that everything is correct.

A benefit of the 'Accept Risk' workflow is that it pushes the responsibility to the ones who really matter. I've seen cases where the upper layers of management realise that they are not the ones who should be accepting a particular risk, since they are not the ones deciding on it. In these cases the decision usually comes down to the developers, who should use the opportunity to gain a bigger mandate to make the best decisions for the project.

Sometimes, a perverse situation occurs where the managers are no longer coding. They may have been promoted in the past because they used to be great programmers, or for other reasons, but now they are out of touch with programming and they no longer understand how it works.

Developer Teams Need Budgets

Business needs to trust developer teams.

Business needs to trust that developers want to do their best for their projects, and for their company.

If business doesn't learn to trust its developer teams, problems will emerge, productivity will be affected and quality/security will suffer.

A great way to show trust is to give the developer team a budget, and with it the power to spend money on things that will benefit the team.

Published "SecDevOps Risk Workflow" book (v0.60)

I just published version v0.60 of the SecDevOps Risk Workflow book.

You can get the book (for free) at https://leanpub.com/secdevops (when you become a reader you will get email alerts with every release)

The diff for this version (with v0.57) shows 138 changed files, 459 additions and 174 deletions.

Hyperlink everything you do

Whether you are a developer or a security person, it is crucial that you link everything you do to a location where somebody can just click on a link and reach it. Make sure whatever you do is hyperlinkable.

This means that what you create is scalable, and it can be shared and found easily. It becomes part of a workflow that is just about impossible to achieve if you don't hyperlink your material.

An email is a black box, a dump of data which is a wasted opportunity because once an email is sent, it is difficult to find the information again. Yes, it is still possible to create a mess when you start to link things, connect things, and generate all sorts of data, but you are playing a better game. You are on a path that makes it much easier in the medium term for somebody to come in, click on the link, and make sure it is improved. It is a much better model.

Saturday, 22 October 2016

Getting Assurance and Trust from Application Security Tests

When you write an application security test, you ask a question. Sometimes the tests you do don't work, but the tests that fail are as important as the tests that succeed. Firstly, they tell you that something isn't there today so you can check it for the future. Secondly, they tell you the coverage of what you do.

These tests must pass, because they confirm that something is impossible. If you do a SQL injection test on a particular page or field, or if you do an authorization test, and the attack doesn't work, you must capture that.

If you try something, and a particular vulnerability or exploit isn't working, the fact that it doesn't work is a feature. The fact that it isn't exploitable today is something that you want to keep testing. Your test confirms that you are not vulnerable to that and that is a powerful piece of information.
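
As an illustration, here is a hedged sketch of such a 'must keep passing' test: it asserts that a classic SQL injection payload does not work against a given page. The URL, parameter and response checks are assumptions made for the example, not a recipe.

    import requests

    BASE = 'https://app.example.com'    # illustrative target, not from the original text

    def test_search_field_rejects_sql_injection():
        # This test asserts that a classic payload does NOT work.
        # It must keep passing: the day it fails, the application has regressed.
        payload = "' OR '1'='1"
        response = requests.get(BASE + '/search', params={'q': payload}, timeout=10)
        assert response.status_code in (200, 400)            # no server error
        assert 'syntax error' not in response.text.lower()    # no database error leaks into the page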

Sunday, 16 October 2016

Presenting "Surrogate dependencies" at London LSCC on 20th of October (Thursday)

Here is the description of this presentation:
Don't mock internal functions and methods, mock external dependencies. How to do that? This presentation will present a framework and practical example of creating Surrogate dependencies (think custom proxies, similar to WireMock). They are based on data collected from Integration tests to create environments where target applications can be executed offline and be subject to advanced security, quality and performance testing. All data is stored natively (JSON, XML) and Git is used for content versioning and simulation.
You can register at https://skillsmatter.com/meetups/8431-lscc-meetup (here is the meetup page)

This is going to be a variation of the Surrogate dependencies presentation delivered last month at the OWASP London Chapter
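
For readers who want the gist of the surrogate-dependencies idea without the slides, here is a rough sketch: a tiny local HTTP stand-in that replays JSON responses previously recorded from integration tests and versioned in Git. The recordings folder, the naming convention and the port are assumptions made for the example.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from pathlib import Path

    RECORDINGS = Path('recordings')   # JSON responses captured during integration tests, versioned in Git

    class Surrogate(BaseHTTPRequestHandler):
        """Replays recorded responses so the target application can run fully offline."""

        def do_GET(self):
            # Assumed naming convention: "/users/42" maps to "recordings/users_42.json"
            recording = RECORDINGS / (self.path.strip('/').replace('/', '_') + '.json')
            if recording.exists():
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(recording.read_bytes())
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == '__main__':
        # Point the application under test at http://localhost:9999 instead of the real dependency
        HTTPServer(('localhost', 9999), Surrogate).serve_forever()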

Published "SecDevOps Risk Workflow" Book (v0.57)

I just published version v0.57 of the (previously called) Jira Risk Workflow book.

You can get the book (for free) at https://leanpub.com/secdevops (when you become a reader you will get email alerts with every release)

As you probably noticed, there was a significant change in this release. The title of the book has been changed to 'SecDevOps Risk Workflow' (see here for the background story)

I hope you will agree that this change better represents the direction of the book and the content I've been adding to it.

Saturday, 15 October 2016

AppSec memo from God

Having a Board-level mandate is very important, since it sends a strong message about the importance of AppSec.
The best way to provide a mandate to the existing AppSec team is to send a memo to the entire company, providing a vision for AppSec and reinforcing its board-level visibility.
Sometimes called the 'Memo from God', the most famous one is Bill Gates' 'Trustworthy Computing' memo from January 2002 (responsible for making Microsoft turn the corner on AppSec).

Example of what it could look like

Here is a variation of a memo that I wrote for a CTO (in a project where I was leading the AppSec efforts) which contains the key points to make. 

Friday, 7 October 2016

Backlog Pit of Despair

In lots of development teams, especially in agile teams, the backlog has become a huge black hole of issues and features that get thrown into it and disappear. It is a mixed bag of things we might want to do in the future, so we store them in the backlog.

The job of the product backlog is to represent all the ideas that anyone in the application food chain has had about the product, the customer, and the sales team. The fact that these ideas are in the backlog means they aren’t priority tasks, but are still important enough that they are captured. Moving something into the backlog in this way, and identifying it as a future task, is a business decision.

Thursday, 6 October 2016

Who is actually making decisions?

One of the interesting situations that occurs when we play the risk acceptance game at large organisations, is how we are able to find out exactly who is making business and technical decisions.
Initially, when a ‘Risk Accepted’ request is made, what tends to happen is that the risk is pushed up the management chain, where each player pushes the risk to their boss to accept. After all, what is at stake is who will take responsibility for that risk, and in some cases, who might be fired for it.
Eventually there is a moment when a senior director (or even the CTO) resists accepting the risk and pushes it down. What he is saying at that moment is that the decision to accept that particular risk has to be made by someone else, someone who has been delegated (officially or implicitly) that responsibility.

Risk Dashboards and emails

 It is critical that you create a suite of management dashboards that map the existing security metrics and the status of RISK tickets:
  • Open, In Progress
  • Awaiting Risk Acceptance, Risk Accepted
  • Risk Approved, Risk not Approved, Risk Expired
  • Allocated for Fix, Fixing, Test Fix
  • Fixed
Visualising this data makes it real and creates powerful dashboards. These need to be provided to all players in the software development lifecycle.
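
As a rough sketch of how such a dashboard could be fed, the script below pulls the count of RISK tickets per status from JIRA's REST search endpoint. The JIRA URL, project key, credentials and status names are placeholders; adjust them to your own workflow.

    import requests

    JIRA = 'https://jira.example.com'        # placeholder JIRA instance
    AUTH = ('dashboard-bot', 'api-token')    # placeholder credentials
    STATUSES = ['Open', 'In Progress', 'Awaiting Risk Acceptance', 'Risk Accepted',
                'Risk Approved', 'Risk not Approved', 'Risk Expired',
                'Allocated for Fix', 'Fixing', 'Test Fix', 'Fixed']

    def risk_ticket_counts(project='RISK'):
        """Return a {status: number of tickets} mapping for the RISK project."""
        counts = {}
        for status in STATUSES:
            jql = f'project = {project} AND status = "{status}"'
            response = requests.get(JIRA + '/rest/api/2/search',
                                    params={'jql': jql, 'maxResults': 0},
                                    auth=AUTH, timeout=30)
            counts[status] = response.json().get('total', 0)
        return counts

    if __name__ == '__main__':
        for status, total in risk_ticket_counts().items():
            print(f'{status:26} {total}')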

Email is not an Official Communication Medium

Emails are conversations, they are not official communication mediums. In companies, there is a huge amount of information and decisions that is only communicated using emails, namely:
  • risks,
  • to-dos
  • non-functional requirements
  • re-factoring needs
  • post-mortem analysis
This knowledge tends to only exist on an email thread or in the middle of a document. That is not good enough. Email is mostly noise, and once something goes into an email, it is often very difficult to find it again.

"JIRA Risk Workflow" Book , alpha version published at Leanpub

To expand on the ideas I presented in Using JIRA to manage RISKS, I decided to create a smallish book focused on how that workflow works.

A key objective is to document the workflow better and allow teams to implement it using their own version of JIRA (I have done multiple presentations where one of the follow-up questions is "Ok, we like it, now what?")

You can get this book from Leanpub (with the option to get it for free) at


Please take a look and let me know what you think of the structure, font, layout, order, content, voice, idea, etc...

Tuesday, 4 October 2016

Survey about Security Champion program

Once you have a Security Champion (SC) program in place, you need to keep track of its effectiveness.

Here (see below) is a survey created by Vinod Anadan, designed to learn from the current SCs and make the program better.

Any other questions we should be asking?



The AppSec team would like to conduct a survey about the Security Champion program.

Sunday, 2 October 2016

Use AppSec to visualise logs

Once you've got your logs, a typical challenge is how to process and visualise them.
This usually happens when you try to address visualisation as a whole: until good filters and multi-stage analysis are in place, there is just too much data and too much information, and you will be left struggling with the sheer size of the data you are looking at.
The key is to create an environment where you are only querying a subset of the data, with fast queries and a REPL-like workflow.
One of the ways to manage this is to start with a small AppSec question (ideally codified as a Test).
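
Here is a sketch of one such small AppSec question ('is any account seeing failed logins from many different IPs?') codified as a test over an already-filtered subset of the logs. The one-JSON-object-per-line log format, the file name and the threshold are assumptions made for the example.

    import json
    from collections import defaultdict

    # Assumed log format: one JSON object per line, e.g.
    # {"event": "login_failed", "user": "alice", "ip": "10.0.0.7"}

    def failed_login_ips(log_lines):
        """Map each user to the set of IPs that produced failed logins."""
        ips = defaultdict(set)
        for line in log_lines:
            event = json.loads(line)
            if event.get('event') == 'login_failed':
                ips[event['user']].add(event['ip'])
        return ips

    def test_no_account_is_targeted_from_many_ips():
        with open('auth.log') as log_file:     # an already-filtered subset of the logs
            ips = failed_login_ips(log_file)
        suspicious = {user for user, addresses in ips.items() if len(addresses) > 5}
        assert not suspicious, f'possible credential stuffing against: {suspicious}'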

Is your pentest delivering on AppSec?

Here is how to review a pentest and figure out if it is a network security assessment or an AppSec security review.

When you look at a pentest, you can tell very quickly if it was done by somebody who understands AppSec (somebody who can code), or by somebody who is approaching the problem from a network security point of view (usually by running lots of tools).
The first things to notice are whether they asked for the source code, and whether they performed a threat model on the target application. If they didn't, then it is most likely going to be a network security assessment.

Who is Paying for AppSec on open code?

When there isn't a commercial company behind an application or library, who is paying for:
  • secure development
  • secure coding standards,
  • threat models,
  • security reviews,
  • dependency management,
  • etc...
One of the interesting questions that arises when we talk about the need for open-source security coding technology, security coding centres, and everything we need to build secure code is: who pays for it?

No server-side generation of dynamic web content

A very powerful design pattern, which can provide a huge amount of security for web applications, is one where there is no server-side generation of dynamic web content.
This avoids the pattern of:
  • starting with clean data objects on the server
  • merging code and data on the server
  • sending it over to the client as HTML
  • rebuilding it on the browser side for rendering and execution
The way to make this happen is to make all your web pages and content downloadable as static resources. This is done from a locked-down server, ideally using git as the deploy mechanism. Data is provided to the UI via AJAX requests from dedicated WebServices.
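
Below is a minimal sketch of the server side of this pattern, using only the Python standard library: the service answers JSON requests for the UI and never merges data into HTML, while the static HTML/JS/CSS are deployed separately from the locked-down static host. The endpoint and data are illustrative.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DataOnlyService(BaseHTTPRequestHandler):
        """Dedicated web service: answers AJAX calls with JSON, never generates HTML."""

        def do_GET(self):
            if self.path == '/api/products':    # illustrative endpoint
                body = json.dumps([{'id': 1, 'name': 'example'}]).encode()
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)         # no server-side page rendering here
                self.end_headers()

    if __name__ == '__main__':
        # The HTML/JS/CSS are static files deployed (e.g. via git) from a separate locked-down server
        HTTPServer(('localhost', 8000), DataOnlyService).serve_forever()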

Make sure your Security Champions are given time

It is very important that security champions are given the time, the focus, the mandate and the information required to do their jobs.

The good news is that now that you have security champions (at least one per team), their work will allow you to see the difference between the multiple teams: the parts of the company that are able to make it work, and those that are struggling to make it happen.

Saturday, 1 October 2016

The currency of AppSec is provability and assurance

(from Software Quality book)

When we (AppSec) make a statement about a particular security issue, it should always be clear and unambiguous.

We should never say "This might be a problem", or "That might be exploited". When it comes to problems or weaknesses in AppSec, we have to express ourselves with confidence.

InfoSec (and AppSec) lack of respect for users

(from Software Quality book)


InfoSec (Information Security) tends to have a really bad attitude towards end-users of technology and developers, where they (the users) get blamed for doing 'insecure stuff' and causing 'security incidents'.
This is crazy; it is like health and safety officers complaining that people are 'doing things' that put them in danger.
The fundamental logic is that security is there to empower users, not to be a tax or dictatorship.

Do you have an AppSec team?

(from Software Quality book)

Let's be clear. If, as part of your InfoSec team, you don't have a team of highly skilled professionals who understand AppSec (Application Security), who can program better than most of your developers, and who would be totally hireable by your dev team, then you don't have an AppSec team.

The Power of Exploits

(from Software Quality book)

If you work for a company that doesn’t have a strong AppSec team, or a company that has not seen powerful (or public) exploitation of their assets, you need to write some exploits.

Never underestimate the power of a good exploit.

A good exploit will dramatically change the business's and developers' perception of what security actually means to their company.

Get 40 days of consulting services to kick start AppSec

(from Software Quality book)

If you don't have a big list of security issues or need exploits, then one option is to hire a security company for forty to sixty days, and let them review your applications across the technology spectrum, and across the platform.

Capture the success stories of your threat models

(from Software Quality book)

One of the key elements of threat modeling is its ability to highlight a variety of interesting issues and blind spots, in particular within the architecture covered by the threat model. One of my favorite moments occurs when the developers and the architects working on a threat model realize something that they hadn't noticed before.

Friday, 30 September 2016

Presentation - "Surrogate dependencies (poc in node js) v1.0"

Here is the second part of the presentation I delivered at the OWASP London Chapter event (29 Sep 2016)

Presentation "NodeJS security - still unsafe at most speeds - v1.0"

Here is the first part of the presentation I delivered at the OWASP London Chapter event (29 Sep 2016)

Sunday, 25 September 2016

Threat Model Community

(from Software Quality book)

There is currently (late 2016) space within the application security world to develop a community focused on Threat Modeling. Such a community would allow the many parties working on Threat Modeling to share information and provide a voice to all the different stakeholders.

Friday, 23 September 2016

The business model of selling a fork

(from Software Security Book)

An open-source-based business model that I really like is one where the company (or team) behind a particular open source project sells a fork of the master repository that is customised and/or maintained for a particular customer.

What that means is the customer buys access to a fork, from the authors of that particular code/repo/project.

That way the company developing the application has a direct connection with the client, and a regular revenue stream.

AppSec should buy tools for developers

(from Software Quality book)

This is a great opportunity to generate goodwill and positive working relationships with developers. If the AppSec team is able to actually find the budget for tools, it will help developers be more productive.

Two great examples are Wallaby.js for JavaScript and NCrunch for .NET

Inside a large organization, you will find teams where, for some reason or another, management hasn't seen investing in tools for developers as a priority.

Developers need data classification

(from Software Security Book)

Every type of data that exists in an organisation, especially the data that is consumed by applications, needs to have a Data Classification mapping.

Developers need to know if a particular piece of data is sensitive, and what value it holds for the business.

A good way to determine the expected level of confidentiality and integrity, is to ask what would happen 'If a particular set of data were to be fully disclosed?' (for example uploaded to PasteBin) or 'If some of the data was being maliciously modified over a period of months?'.

I Abuse the term ‘Unit Test’

(from Software Security Book)

For me, a Unit Test is a test of a 'unit'. The only question is how big that 'unit' is.

If you go to the Wikipedia page for List of Unit Testing Frameworks, you will see a large list of 'unit test' frameworks, which range from traditional 'unit tests' (of an individual function or procedure) all the way to:
  • integration tests,
  • production tests,
  • e2e tests (end-to-end)
  • performance tests
  • smoke tests, etc…
  • (i.e. every type of automatable test).
For me, if you can run it with a unit test framework, then it is a unit test.
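
To illustrate the point, the sketch below runs a traditional function-level test and an end-to-end HTTP check side by side in the same unittest framework; by this definition both are 'unit tests', the unit is just much bigger in the second one. The target URL is illustrative, and the HTTP check assumes the 'requests' library is available.

    import unittest

    import requests

    def add(a, b):
        """Illustrative function for the traditional, narrow 'unit'."""
        return a + b

    class TraditionalUnitTest(unittest.TestCase):
        def test_add(self):
            self.assertEqual(add(2, 2), 4)

    class EndToEndTest(unittest.TestCase):
        """Also a 'unit test' by this definition: the unit is just the whole deployed app."""

        def test_homepage_is_up(self):
            response = requests.get('https://app.example.com/', timeout=10)   # illustrative URL
            self.assertEqual(response.status_code, 200)

    if __name__ == '__main__':
        unittest.main()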

Putting Data in PasteBin

(from Software Quality book)

One of the best ways to make Developers, Architects and Managers understand the confidentiality of the data hosted by their application is to ask the question, 'Can we put all of the data in your database on PasteBin?' [^PasteBin]

That question makes all parties involved really think about what that database contains.

Ideally, the correct answer is 'yes, there is no problem': all that data could go to PasteBin, because the data shouldn't mean anything by itself.

Graduates to manage JIRA

(from Software Quality Book)

One of the challenges of the JIRA RISK workflow is managing and maintaining the open issues. This can be a considerable amount of work, especially when there are 200 or more issues.

Note that, in large organizations, the number of risks opened and managed should be above 500, which is not a lot; in fact, that is the level at which visibility into existing risks really starts to happen.

The solution isn't to have fewer issues.

Describe Risks as Features rather than as Wishes

(from Software Quality Book)

When opening up a risk JIRA ticket, it is key to describe the exact behavior of that issue as a feature, rather than what you would like to see happening (i.e. your wish list).
For example:
  • instead of saying 'application should encode XYZ value', you should say 'XYZ value is not encoded'
  • instead of saying 'application shouldn't be vulnerable to XSS or SQL injection', you should say 'application is vulnerable to SQL injection'. In this case, SQL Injection is a feature of the application, and while the application allows SQL Injection, the application is working as designed (whether that is intended or not, that is a different story :) )

Know what was not tested

(from Software Quality Book)

When you're reading an application security report (like a pentest), one of the most important questions that you should get an answer to is 'What tests did they run?'. This is especially important for the tests (i.e. exploits) they tried to run but were unsuccessful.
The report(s) will show what was successful, but that's only half (or potentially less than half) of what you want to know.

Broken Tests Aren't The Problem

(from Software Quality Book)

It is quite worrying how many times you hear complaints about tests' execution (for example, their speed or how hard they are to maintain).
These complaints can be so strong that they even question whether the tests are 'worth it' (i.e. whether the negative sides of maintaining the tests outweigh their benefits).
This is very dangerous, because it promotes the idea that it is OK not to test your code. And that is just crazy!

Thursday, 22 September 2016

"Turning TDD upside down - For bugs, always start with a passing test" - v0.5 Sep 2016

Here is the presentation I delivered at LSCC (London Software Craftsmanship Community) on the 22nd Sep 2016

Title: Turning TDD upside down - For bugs, always start with a passing test
Description: The common TDD workflow is to start by writing a failing test. The problem with this approach is that it doesn't work well for a very specific scenario (fixing bugs). This presentation will present a different workflow which will make the coding and testing of those fixes much easier, faster, simpler, more secure and more thorough.