Monday, 13 July 2015

Security ramble

All the recent hacks in which massive amounts of personal data have been exposed made me wonder whether, in time, the public perception of privacy and data security will change.
What I mean is that people nowadays seem genuinely surprised and distressed whenever their data gets stolen, be it photos from iCloud, SSNs and address info from government databases, or PHI from health and insurance providers. It's almost like your average Joe or Jane doesn't expect it to ever happen... but data gets stolen all the time. And I don't see any reason for these hacks to stop in the near future.

Wouldn't it be more reasonable to assume that every information storage system will be hacked, and any data will be stolen? That assumption gives you the state of mind and the tools to concentrate on active monitoring and a mitigation plan, whereas nowadays it looks like people mostly concentrate on preventing the hack (some big hacks went unnoticed for many months!).

I would much prefer it if the people responsible for the systems where my personal data is stored:

  • Assumed they are gonna be hacked.
  • Made sure that when it happens, they will notice (automated, smart monitoring systems).
  • Made sure it is complicated and/or expensive to use stolen data to harm me (block bank accounts, make it possible to cancel IDs easily, make it hard to make sense of my PHI without some key that is also easy to cancel/revoke, make sure devices that can physically harm me have built-in protection against that physical harm - e.g. it shouldn't be possible to program a heart pacemaker to murder its wearer).
  • Worked on making the attack expensive (we are gonna be hacked, but it will be an annoying, frustrating and expensive process for the hacker) and long (store unrelated data in different, disconnected places, so a separate hack is needed for each of the pieces).
And I myself am assuming my data can be stolen at any point, so I try to behave with that assumption in mind:
  • There are no private emails or photos that, if made public, would harm me - I do not put stuff that can harm me on the internet. I don't say shitty things about people behind their backs. I do not lie. Not that I naturally feel the need to do all that stuff, but assuming you can get exposed at any moment does provide additional motivation to refrain from being a dick.
  • My money is stored in different places, and my cards are not connected to my savings.
  • My most important email account is behind 2FA and connected to my phone, so if it is compromised, I will notice and can block it fast.
  • And last but not least, I am mentally prepared that it can all fail me. If that happens it will mess me up a bit and create some hassle to block/change/restore cards, accounts and IDs, but it will not be the end of the world.

Thursday, 9 April 2015

Oracle troubles and findings: tuning experience

I am not an Oracle DBA. But with performance testing I more often than not end up creating the whole environment, which means setting up the Oracle server as well. Since I am not testing Oracle itself, I usually only tune it enough for it not to be the bottleneck. Luckily for me, Oracle is actually pretty solid compared to the applications I test, and it only becomes a bottleneck in scalability testing where it serves multiple application servers. Still... recently I bumped into a set of correlated problems that led me to a tuning effort on the Oracle server itself.
Sponsored by the Internet - meaning that everything I did was googled and applied with fingers crossed.

So, here it goes.

The first problem was the following: when I went from 4 application server nodes to 6, response times skyrocketed. It was obviously an infrastructural problem, and after a while Oracle was the only remaining suspect. Unlike usual, CPU usage on the Oracle server wasn't that high, so I had to dig a bit deeper, and guess what I found: concurrency issues such as "cursor: pin S wait on X"!
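I mostly spotted the waits in AWR reports, but a quick way to see them live is to group active sessions by wait event - a minimal sketch, the filtering is up to you:

-- sessions currently waiting on cursor/library-cache related events
SELECT event, COUNT(*) AS sessions_waiting
  FROM v$session
 WHERE wait_class <> 'Idle'
   AND event LIKE 'cursor:%'
 GROUP BY event
 ORDER BY sessions_waiting DESC;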

To avoid doing hard parses, Oracle puts every new query and its execution plan into a shared cursor tree. Access to that tree is controlled by a mutex-pin mechanism: only one session can grab the mutex pin for a specific cursor at a time. Also, similar queries are stored as leaves under a common root, and my understanding is that the whole root gets pinned during any update to the tree...
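One way to see how crowded that tree gets with near-identical statements is to group v$sql by FORCE_MATCHING_SIGNATURE - statements that would collapse into a single cursor under forced cursor sharing share the same signature. A rough sketch (the threshold is arbitrary):

-- statements that differ only in literals share a FORCE_MATCHING_SIGNATURE
SELECT force_matching_signature,
       COUNT(*)      AS versions,
       MIN(sql_text) AS sample_text
  FROM v$sql
 WHERE force_matching_signature <> 0
 GROUP BY force_matching_signature
HAVING COUNT(*) > 50
 ORDER BY versions DESC;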

Anyway, it was happening for two reasons:
1. The application under test was using queries with literals where it should have been using prepared statements (see the sketch after this list).
2. My Oracle version (11.2.0.1) had known issues around the shared cursor tree.
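To illustrate the first point - the table and column here are made up, but the pattern is the point:

-- literals: every distinct value is a brand-new statement for the parser,
-- each one getting its own cursor in the shared pool
SELECT * FROM orders WHERE customer_id = 1042;
SELECT * FROM orders WHERE customer_id = 1043;

-- bind variable (what a prepared statement sends): one cursor,
-- parsed once and reused for every value
SELECT * FROM orders WHERE customer_id = :customer_id;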

So I updated to 11.2.0.4 and set CURSOR_SHARING=FORCE. What this option does is replace all literals in all queries with system-generated bind variables, which effectively means that all the queries that differ only in literals are now treated as the same query: they share the same cursor, and the cursor tree doesn't need to be constantly updated. This took care of the concurrency issues. It also created another problem.
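Before getting to that problem, for reference, this is roughly what the change and its effect look like (the table and columns below are made up; the rewriting is done by Oracle internally):

-- needs ALTER SYSTEM privileges; affects new parses
ALTER SYSTEM SET cursor_sharing = FORCE;

-- after that, a statement like
--   SELECT * FROM orders WHERE status = 'NEW' AND region = 7;
-- is parsed as if it were written
--   SELECT * FROM orders WHERE status = :"SYS_B_0" AND region = :"SYS_B_1";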

Suddenly one of my other queries, which had never been a problem before - a very simple and well-behaved query - became a huge bottleneck. It would take thousands of CPU cycles to execute where before it took tens! This one cost me days of my time and numerous experiments that slightly improved the situation but didn't solve the main issue, and in the end I had to go to the DBAs for help.

It turned out that the innocent "1=2" in that query (which was there because the query was dynamically generated with optional conditions) was replaced by something like ":SYS_B_0=:SYS_B_1", and that meant Oracle was grabbing those bind variables and evaluating the clause again and again for each row of a huge table (I would have thought it would do it once, see that it's FALSE and leave it at that - but no).
This was of course the result of CURSOR_SHARING=FORCE. I'll say in advance that I got exactly the same behaviour with CURSOR_SHARING=SIMILAR.
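To make it concrete, the query shape was roughly like this (table and columns are made up; the "1=2" was a placeholder the query generator could append optional conditions to):

-- with the literal, the optimizer can fold 1=2 to FALSE at parse time
-- and never touch that branch
SELECT id, payload
  FROM big_table
 WHERE 1 = 2
    OR (:filter IS NOT NULL AND indexed_col = :filter);

-- with CURSOR_SHARING=FORCE the constant predicate arrives as
--   WHERE :"SYS_B_0" = :"SYS_B_1" OR ...
-- so it can no longer be folded away at parse time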

The suggested fix in my case was either to switch to prepared statements everywhere, so that we wouldn't need CURSOR_SHARING=FORCE/SIMILAR at all, or to remove the "1=2" from the query that suffered from that setting. Can't have both.

Other useful tuning (rough sketches of all three after the list):

  • Increasing shared_pool_size.
  • Increasing session_cached_cursors.
  • Weirdly enough, locking statistics on selected columns helped.
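The sizes and object names below are illustrative, not my actual values, and note that DBMS_STATS locks statistics per table or schema rather than per individual column:

-- bigger shared pool for all those cursors
ALTER SYSTEM SET shared_pool_size = 2G;

-- static parameter: needs SCOPE=SPFILE and an instance restart
ALTER SYSTEM SET session_cached_cursors = 200 SCOPE=SPFILE;

-- lock optimizer statistics so they stop shifting under load (SQL*Plus syntax)
EXEC DBMS_STATS.LOCK_TABLE_STATS('APP_OWNER', 'BIG_TABLE');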

Sunday, 1 February 2015

HAProxy balancing https backends

Recently I needed to configure load balancing in my environment, balancing between a few https servers with sticky sessions enabled. I looked in the haproxy manual, I googled, I asked - and for days I couldn't make it work.

Most of the haproxy configuration examples out there are for the case where the client connects to haproxy via https, and then haproxy decrypts the traffic and balances requests between http backends. The few examples with https backends assumed that no sticky sessions were needed, so they all sat on top of plain tcp mode. To this day I have not found a guide or an example of how to configure what I needed, so once I figured it out, I thought I'd share.

So the way you do it is:
0) You need haproxy 1.5+; before that haproxy did not support https (native SSL) on its own.
1) A client connects to haproxy via https. There needs to be a certificate + private key combination (that the client would trust) on the haproxy server.
2) HAProxy decrypts the traffic and attaches a session cookie. If the cookie is already there, it knows where to send the request.
3) HAProxy encrypts the traffic again before sending it to the backend (where the backend can decrypt it).
4) And the same in reverse on the way back.

And the configuration for that is:

  • For both backend and frontend you should have mode http.
  • In the bind line you need to add ssl crt <path to haproxy certificate + private key file>.
  • In the backend section you need to set a load balancing algorithm - e.g. roundrobin or leastconn.
  • In the backend section you also need to set a cookie - e.g. cookie JSESSIONID insert indirect nocache.
  • For each server you need to say "ssl" after the ip:port, and then also set a cookie.
For me, the one part I couldn't find in any guides was putting "ssl" in the server line (as well as in the bind line). I might have missed it somewhere in the not-so-helpful haproxy manual.

One thing I didn't go into was setting up proper certificates on the backend servers in my environment, because of course in a test environment they are self-signed and all that. To work around it, just add another global setting to the haproxy config: ssl-server-verify none.
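With haproxy 1.5 that is a one-liner in the global section - roughly like this (only the relevant line shown):

global
    # accept backend certificates without verifying them - test environments only
    ssl-server-verify none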

And here's an example of the frontend & backend sections of the config file that make it work:

frontend  main
    mode http
    bind :443 ssl crt /etc/haproxy/cert.pem
    default_backend app

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    mode http
    balance     roundrobin
    option httpchk GET /concerto/Ping
    cookie JSESSIONID insert indirect nocache
    server  app1 10.0.1.11:443 ssl check cookie app1
    server  app2 10.0.1.12:443 ssl check cookie app2
    server  app3 10.0.1.13:443 ssl check cookie app3
    server  app4 10.0.1.14:443 ssl check cookie app4