Archive for the ‘Security’ Category.

In case you haven’t noticed: VIFF 0.1 is released

In my last post I asked you what name you preferred for my cryptographic runtime, and the winner is VIFF, which stands for Virtual Ideal Functionality Framework. Now I just need someone to make a logo with a cute little dog that says “Viff!” :-)

I packaged things up in a 0.1 release — VIFF is now online at http://viff.dk/ with the Mercurial repository at http://hg.viff.dk/viff/. If you are interested in the development of VIFF, then consider subscribing to new releases on Freshmeat. There will be a mailing list up as soon as Gmane approves my application; stay tuned!

Please help me choose a name for my project

I have been busy with a new project this summer, a project that will be part of my PhD in cryptographic protocols. It is working well, but I am not satisfied with the name I have come up with so far.

It is a library which enables you to write multi-party computations (MPC) in an easy way. A secure MPC protocol is a protocol between a number of players who want to execute some joint computation, but in a way that reveals nothing about their inputs. If the computation is the evaluation of a function f, then imagine each player Pi holding an input xi. When the protocol is finished, all players must know y = f(x1, x2, …, xn), but nothing more.

As an example where MPC is helpful, consider a bunch of companies that want to know how they compare to each other. They want to compute their average profit, but are of course unwilling to share private information about their expenses and income. This is the problem of benchmarking, and traditionally it has been solved by having the companies reveal their sensitive information to a mutually trusted third party. This could be a consulting company which is paid so much money by the benchmarking participants that they can trust it not to cheat (the companies have essentially bribed the consulting company to be honest).

Paying a third party so much money that it has no incentive to collude with any player is of course an expensive option. A secure multi-party computation can do the same job, but without a trusted third party. The protocol is designed in such a way that it acts as if a trusted third party, a so-called ideal functionality, were present. An ideal functionality (IF) should be thought of as a computer which cannot be hacked and which faithfully carries out the program put into it. The players can therefore trust this computer and simply reveal their private inputs to it.

In real life there is no such computer, but the MPC protocol creates a situation that looks exactly as if there had been. This is the definition of security: the real protocol must look exactly like a protocol executed in an ideal world. Because no attacks can occur in the ideal world (the IF cannot be attacked, by definition), we conclude that no attacks can occur in the real world either. And so the protocol is called secure.
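
To make the benchmarking idea concrete, here is a toy sketch (my own illustration, not VIFF code; VIFF uses Shamir sharing rather than the simple additive sharing below) of how three companies could compute their total profit without any of them revealing their own number:

# Toy illustration of additive secret sharing: each company splits its
# profit into random shares modulo a prime, so no single share reveals
# anything, yet the shares can be combined to give the correct sum.
import random

P = 2**31 - 1  # a prime large enough to hold the sum

def ideal_world(profits):
    # The trusted third party (the ideal functionality) simply sees all inputs.
    return sum(profits)

def share(secret, n):
    # Split `secret` into n random shares that sum to it modulo P.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def real_world(profits):
    n = len(profits)
    # Each company shares its input; company j keeps one share of every input.
    all_shares = [share(p, n) for p in profits]
    # Each company locally adds the shares it holds and publishes the sum.
    partial_sums = [sum(col) % P for col in zip(*all_shares)]
    return sum(partial_sums) % P

profits = [1200, 3400, 2100]
assert ideal_world(profits) == real_world(profits) == 6700

Each individual share is a uniformly random field element, so seeing one share tells you nothing about the profit behind it; only the final sum is revealed.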

My library is written in Python and a program written using it looks like this:

import sys
from X.field import GF
from X.config import load_config
from X.runtime import Runtime
from X.util import dprint

Z31 = GF(31)
my_id, conf = load_config(sys.argv[1])
my_input = Z31(int(sys.argv[2]))

rt = Runtime(conf, my_id, 1)
x, y, z = rt.shamir_share(my_input)
result = rt.open(rt.mul(rt.add(x, y), z))

dprint("Result: %s", result)
rt.wait_for(result)

This program starts by including some stuff from X, which stands for the package name of my library. It is this X that I want to see replaced by something else. The program then defines a field for the computation and loads the configuration and input. A Runtime is then created. The Runtime is used for all computations; it has methods for addition, multiplication, comparison, and so on. In this example we compute (x+y)*z. The result is opened, printed, and finally we ask the runtime to wait for the result.

The final step, where we wait for the result, is necessary since my library is asynchronous. The wait_for method goes into an event loop and only returns when the variables given have received a value. I use Twisted for the asynchronous infrastructure and it has worked extremely well.
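
If you have not used Twisted before, the following sketch (plain Twisted, not my library's actual API) shows the pattern: a Deferred stands in for a value that will arrive later, callbacks are attached to it, and an event loop runs until the value shows up.

from twisted.internet import defer, reactor

def compute_result():
    d = defer.Deferred()
    # Pretend the network delivers the opened value after one second.
    reactor.callLater(1, d.callback, 42)
    return d

def print_result(value):
    print("Result: %s" % value)
    reactor.stop()

compute_result().addCallback(print_result)
reactor.run()  # an event loop like the one wait_for enters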

So, if you’re with me so far, then you should have at least a rudimentary knowledge of MPC and what it is good for. I already have some name suggestions and I hope to get some feedback on them (listed in no particular order):

  • NoTTP, short for No Trusted Third-Party. This is what MPC does: it removes the trusted third-party. But does it look good if you write from nottp.field import GF256? Also, the name almost sounds like NoTCP which could be some weird project that loves UDP :-)

  • PySMPC, short for Python Secure Multi-Party Computation. The library is written in Python and does SMPC. One might drop the “S” and go with PyMPC since nobody wants to deal with insecure MPC anyway :-) I don’t like that the name ties the library to Python since I might want to rewrite it in another language in the future.

  • Trent or NoTrent. According to Wikipedia, Trent is sometimes used as the name of the trusted arbitrator in cryptographic protocols (just as Alice and Bob are used instead of A and B). So this library could be said to give you a virtual “Trent” and help you get rid of a real one. I don’t like the word “Trent” since I don’t think it is that widely used.

  • VIFF, short for Virtual Ideal Functionality Framework. The library helps you create protocols that look exactly as if there had been an IF present. Therefore I think it can be said to create a virtual ideal functionality. According to Google, VIFF mostly stands for “Vancouver International Film Festival”.

  • AMPC, short for Asynchronous Multi-Party Computation. This emphasizes the asynchronous nature of the library. I think it is somewhat difficult to pronounce “AMPC”.

Any other suggestions? Which name do you like the most? Please vote by leaving a comment! (Those of you who already know the name I have used so far are kindly asked not to reveal it — I want to collect some opinions first.)

By the way: instead of the abbreviations, I would prefer a name like “Twisted” or “Python” which can be pronounced and which people know how to spell and capitalize. There is another project in this area called FairPlay and I think this is a very good name: easy to remember, it can be abbreviated to just FP, and it actually says a bit about the project. So if you could suggest something along the lines of that it would be great! :-)

CPT exam tomorrow

I have my exam in CPT tomorrow — Thomas and I have been practicing all day and it has really helped. So I now feel pretty confident that I know what I need to know about commitment schemes, zero-knowledge interactive proof systems and arguments, Σ-protocols, electronic voting, electronic cash, and secure multiparty computations. Those were the six exam topics :-)

By the way, I believe Rune is still accepting bets in the Exam Game.

Studying for FMIS

Today I’m at the ETH to study for the exam in Formal Methods for Information Security, which will be next Tuesday.

I’m currently looking at BAN logic (see also Wikipedia), a nice system for proving (a limited set of) properties about security protocols. It deals with the beliefs of the participants in the protocol in a formal way. This allows you to verify that the goals of the protocol can be fulfilled based on the initial assumptions given. An example from BAN could be a rule that says:

If (P believes (Q controls X))
and (P believes (Q believes X))
then P believes X.

Here one should interpret the keyword controls as “has jurisdiction over”. An example could be a server S which has authority over the public keys for B:

If (A believes (S controls public key for B))
and (A believes (S believes public key for B = KB))
then A believes public key for B = KB.

So this rule expresses the trust of A in S. But for this rule to be applicable, A needs to believe that S believes public key for B = KB. There’s a rule for introducing such beliefs about other participants’ beliefs:

If (P believes (X is fresh))
and (P believes (Q said X))
then P believes (Q believes X)

This rule is based on the assumption that you will not say something which you do not believe. There are then more rules stating when you can believe that someone said something (for example, when that something was signed and you believe in the key used for the signature).

From this big set of rules, one can make more and more deductions and hopefully derive the goals of the protocol. A simple, but pretty cool concept.
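
Just to show how mechanical these deductions are, here is a tiny forward-chaining sketch in Python (the belief representation and function names are my own invention, purely for illustration) that derives the public-key conclusion above from the two rules:

# Toy forward-chaining over BAN-style beliefs. Beliefs are nested tuples
# such as ("believes", "A", ("controls", "S", "key(B) = KB")).

def derive(assumptions):
    """Apply the jurisdiction and nonce-verification rules until fixpoint."""
    beliefs = set(assumptions)
    changed = True
    while changed:
        changed = False
        for belief in list(beliefs):
            kind, p, stmt = belief
            if kind != "believes" or not isinstance(stmt, tuple):
                continue
            # Nonce verification: P believes fresh(X) and P believes (Q said X)
            #                     => P believes (Q believes X)
            if stmt[0] == "said":
                _, q, x = stmt
                if ("believes", p, ("fresh", x)) in beliefs:
                    new = ("believes", p, ("believes", q, x))
                    if new not in beliefs:
                        beliefs.add(new)
                        changed = True
            # Jurisdiction: P believes (Q controls X) and P believes (Q believes X)
            #               => P believes X
            if stmt[0] == "controls":
                _, q, x = stmt
                if ("believes", p, ("believes", q, x)) in beliefs:
                    new = ("believes", p, x)
                    if new not in beliefs:
                        beliefs.add(new)
                        changed = True
    return beliefs

# The public-key example from above: S has jurisdiction over B's key.
assumptions = [
    ("believes", "A", ("controls", "S", "key(B) = KB")),
    ("believes", "A", ("fresh", "key(B) = KB")),
    ("believes", "A", ("said", "S", "key(B) = KB")),
]
conclusions = derive(assumptions)
assert ("believes", "A", "key(B) = KB") in conclusions

Running derive on the three assumptions first applies the freshness rule to conclude that A believes that S believes the key, and then the jurisdiction rule to conclude that A believes the key itself.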

WordPress insecurity

Another computer-related thing needing attention when I got home was WordPress… version 1.5.2 has just been released to fix yet another security hole, although their announcement has no specifics (as usual).

They write “We’re happy to announce that a new version of WordPress is now available for download.” How can they be happy that a security hole has been found in their “extremely stable 1.5 series” once again?! They have released version 1.5.1 (May 9th, renamed to version 1.5.1.1), 1.5.1.2 (May 27th), 1.5.1.3 (June 29th), and now 1.5.2 (August 14th) in response to security holes being found.

I think that’s a bit too much for me to call this thing “extremely stable” (I obviously believe that security is an important feature of a “stable” application). It’s good that they react to the security holes and try to fix them fast, but I don’t like the way they just write that they have “addressed all the security issues that have been circulating the past few days”. Some questions immediately spring to mind:

  • How many security holes were there?

  • What was the nature of the hole(s)?

    • Could they “just” change the database? If so, which parts of it?

    • Could they upload files to my server? If so, could they overwrite my previous files?

  • How can I see in my log files if I’ve been exploited?

Instead of being vague, I would like to see specific information about the problems. Browsing through the changesets doesn’t really help either, for the WordPress developers seem to make a point of obscuring their fixes.

Take this changeset (revision 2779) for example, which was committed on the 1.5 branch two days before the announcement of version 1.5.2 with the innocent message of “Move above”. Some lines are indeed moved up a little further in wp-settings.php — they deal with undoing the work of the infamous register_globals setting in PHP. But the lines are not just moved: an extra check is added to ensure that the variable $table_prefix isn’t unset. Why? Is this one of the security problems they’re talking about? Given the extreme lack of comments we can only guess…

Or maybe the fix was smuggled in with revision 2780, together with fixes for seven small bugs and feature requests? The change to wp-admin/users.php in that changeset involves replacing

$id = $_GET['id'];

with

$id = (int) $_GET['id'];

and to my eyes this could be the fix they’re talking about, especially since $id is used in an SQL query right afterwards… So if this analysis is correct, then WordPress 1.5.2 was sent out to guard against an SQL injection attack. If anybody else has information about this, I would of course be interested!
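
To illustrate why such a small cast matters, here is a sketch of the attack class it guards against (in Python with sqlite3 and made-up table and column names, not WordPress’s actual PHP code):

# Why forcing an id to be an integer blocks SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, login TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'admin', 'secret')")

user_supplied = "0 OR 1=1"   # a malicious ?id=... request parameter

# Unsafe: pasting the raw value into the query lets the attacker rewrite it.
unsafe = "SELECT login, pw FROM users WHERE id = %s" % user_supplied
print(conn.execute(unsafe).fetchall())    # returns the admin row

# Forcing the value to be an integer (the same idea as the (int) cast in
# users.php) keeps the injected SQL from ever reaching the query.
try:
    safe_id = int(user_supplied)          # rejects "0 OR 1=1" outright
except ValueError:
    safe_id = 0
print(conn.execute("SELECT login, pw FROM users WHERE id = ?",
                   (safe_id,)).fetchall())  # returns nothing

The unsafe query lets the injected OR 1=1 rewrite the WHERE clause, while forcing the id through int (or using a parameterized query) keeps attacker-controlled SQL out of the statement entirely.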