sandboxing Rust?


sandboxing Rust?

Josh Haberman
Is it a design goal of Rust that you will be able to run untrusted
code in-process safely?

In other words, by whitelisting the set of available APIs and
prohibiting unsafe blocks, would you be able to (eventually, once Rust
is stable and hardened) run untrusted code in the same address space
without it intentionally or unintentionally escaping its sandbox?

(Sorry if this is a FAQ; I couldn't find any info about it.)

Thanks,
Josh
_______________________________________________
Rust-dev mailing list
[hidden email]
https://mail.mozilla.org/listinfo/rust-dev

Re: sandboxing Rust?

Corey Richardson
Rust's safety model is not intended to prevent untrusted code from
doing evil things.

On Sat, Jan 18, 2014 at 10:18 PM, Josh Haberman <[hidden email]> wrote:

> Is it a design goal of Rust that you will be able to run untrusted
> code in-process safely?

Re: sandboxing Rust?

Jack Moffitt
> Rust's safety model is not intended to prevent untrusted code from
> doing evil things.

We'd like something like this for Servo, but I think the idea was to
see if we couldn't use NaCl to do this kind of sandboxing. The NaCl
devs seemed to think this might be interesting as well.

jack.

Re: sandboxing Rust?

Scott Lawrence
In reply to this post by Corey Richardson
On Sat, 18 Jan 2014, Corey Richardson wrote:

> Rust's safety model is not intended to prevent untrusted code from
> doing evil things.

Doesn't it successfully do that, though? Or at least with only a small amount
of extra logic? For example, suppose I accept, compile, and run arbitrary Rust
code, with only the requirement that there be no "unsafe" blocks (ignore for a
moment the fact that libstd uses unsafe). Barring compiler bugs, I think it's
then guaranteed nothing bad can happen.

It seems to me that (as usual with languages like Rust) it's simply a mildly
arduous task of maintaining a parallel libstd implementation to be used for
sandboxing, which either lacks implementations for dangerous functionality, or
has them replaced with special versions that perform correct permissions
checking. That, coupled with forbidding unsafe blocks in submitted code,
should solve the problem.

I could be completely wrong. (Is there some black magic I don't know?)
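[Ed.: Scott's "parallel libstd" idea can be sketched as a capability-style facade: sandboxed code only ever sees a handle whose methods perform the permission check before touching anything dangerous. The names below (`SandboxFs`, `allow`) are invented for illustration in present-day Rust syntax; nothing like this existed in libstd, and a real version would wrap the actual I/O APIs.]

```rust
use std::collections::HashSet;

// Hypothetical checked facade standing in for a sandboxed libstd.
// It models only the permission check itself, not real file I/O.
struct SandboxFs {
    allowed: HashSet<String>,
}

impl SandboxFs {
    fn new() -> Self {
        SandboxFs { allowed: HashSet::new() }
    }

    // Grant the sandboxed code access to a single path.
    fn allow(&mut self, path: &str) {
        self.allowed.insert(path.to_string());
    }

    // Checked replacement for a file-read API: refuses any path that
    // was not explicitly whitelisted beforehand.
    fn read(&self, path: &str) -> Result<String, String> {
        if self.allowed.contains(path) {
            // A real implementation would perform the OS call here.
            Ok(format!("<contents of {}>", path))
        } else {
            Err(format!("permission denied: {}", path))
        }
    }
}

fn main() {
    let mut fs = SandboxFs::new();
    fs.allow("/tmp/input.txt");
    assert!(fs.read("/tmp/input.txt").is_ok());
    assert!(fs.read("/etc/passwd").is_err());
    println!("whitelist check behaves as expected");
}
```

The same shape works for any dangerous capability (network, process spawning): the untrusted code receives only the facade, never the underlying API.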


--
Scott Lawrence

Re: sandboxing Rust?

Huon Wilson
In reply to this post by Jack Moffitt
On 19/01/14 14:23, Jack Moffitt wrote:

> We'd like something like this for Servo, but I think the idea was to
> see if we couldn't use NaCl to do this kind of sandboxing. The NaCl
> devs seemed to think this might be interesting as well.

Isn't the "correct" way to do this to use the OS's security features?

FWIW, https://github.com/mozilla/rust/issues/6811 covers allowing
spawning tasks as sandboxed tasks, and strcat wrote up something about
sandboxing on Linux for Servo:
https://github.com/mozilla/servo/wiki/Linux-sandboxing


Huon

Re: sandboxing Rust?

Corey Richardson
In reply to this post by Scott Lawrence
On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence <[hidden email]> wrote:
> On Sat, 18 Jan 2014, Corey Richardson wrote:
>
>> Rust's safety model is not intended to prevent untrusted code from
>> doing evil things.
>
>
> Doesn't it successfully do that, though?

It might! But Graydon was very adamant that protection from untrusted
code was/is not one of Rust's goals.

I can't think of anything evil you could do without unsafe code,
assuming a flawless compiler.

Re: sandboxing Rust?

Daniel Micay
In reply to this post by Scott Lawrence
On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence <[hidden email]> wrote:

> On Sat, 18 Jan 2014, Corey Richardson wrote:
>
>> Rust's safety model is not intended to prevent untrusted code from
>> doing evil things.
>
>
> Doesn't it successfully do that, though? Or at least with only a small amount
> of extra logic? For example, suppose I accept, compile, and run arbitrary
> Rust code, with only the requirement that there be no "unsafe" blocks
> (ignore for a moment the fact that libstd uses unsafe). Barring compiler
> bugs, I think it's then guaranteed nothing bad can happen.

Even a small subset of Rust hasn't been proven to be secure. It has
plenty of soundness holes left in the unspoken specification. It will
eventually provide a reasonable level of certainty that you aren't
going to hit one of these issues just writing code, but it's not even
there yet.

> It seems to me that (as usual with languages like Rust) it's simply a mildly
> arduous task of maintaining a parallel libstd implementation to be used for
> sandboxing, which either lacks implementations for dangerous functionality,
> or has them replaced with special versions that perform correct permissions
> checking. That, coupled with forbidding unsafe blocks in submitted code,
> should solve the problem.

You'll need to start with an implementation of `rustc` and `LLVM` free
of known exploitable issues. Once the known issues are all fixed, then
you can start worrying about *really* securing them against an
attacker who only needs to find a bug on one line of code in one
poorly maintained LLVM pass. Even compiling untrusted code with LLVM
without running it is a very scary prospect.

> I could be completely wrong. (Is there some black magic I don't know?)

Yes, you're completely wrong. This kind of thinking is dangerous, and
it's how we ended up in the mess where everyone is using ridiculously
complex and totally insecure web browsers to run untrusted code
without building a very simple trusted sandbox around it. Many
exploits are discovered every year, and countless more are kept
private by entities like nation states and organized crime.

The language isn't yet secure and the implementation is unlikely to
ever be very secure. LLVM is certainly full of many known exploitable
bugs and many more unknown ones. There are many known issues in
`rustc` and the language too.

I don't see much of a point in avoiding a process anyway. On Linux, it
has close to no overhead over a thread. Giving up shared memory is an
obvious first step, and the process can be restricted to making
`read`, `write` and `exit` system calls.
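[Ed.: the read/write/exit restriction Daniel describes maps directly onto Linux's seccomp "strict" mode. Here is a minimal sketch in present-day Rust, declaring prctl(2) by hand rather than pulling in a bindings crate; the constants are the kernel's PR_SET_SECCOMP (22) and SECCOMP_MODE_STRICT (1). Entering strict mode is irreversible and any other syscall afterwards kills the process, so the call is kept behind a function rather than run unconditionally.]

```rust
use std::os::raw::{c_int, c_ulong};

// Kernel constants from <linux/prctl.h> and <linux/seccomp.h>.
const PR_SET_SECCOMP: c_int = 22;
const SECCOMP_MODE_STRICT: c_ulong = 1;

extern "C" {
    // prctl(2), declared by hand; Rust links against libc by default
    // on Linux, so this resolves without any extra crate.
    fn prctl(option: c_int, arg2: c_ulong, arg3: c_ulong,
             arg4: c_ulong, arg5: c_ulong) -> c_int;
}

// Irreversibly restrict the calling thread to read, write, _exit and
// sigreturn. Every other system call delivers SIGKILL, so this must be
// the very last setup step before handing control to untrusted code.
fn enter_strict_sandbox() -> Result<(), std::io::Error> {
    let rc = unsafe { prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) };
    if rc == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
}

fn main() {
    // Deliberately not calling enter_strict_sandbox() here: once entered,
    // even a normal process exit (exit_group) would be killed. A real
    // worker would call it after setting up its pipes, then loop doing
    // nothing but read and write.
    let _ = enter_strict_sandbox;
    println!("sandbox entry compiled; PR_SET_SECCOMP = {}", PR_SET_SECCOMP);
}
```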

The Chromium sandbox isn't incredibly secure, but it's at least not
insane enough to render pages in the same process that compiles
JavaScript. Intel's open-source Linux driver is reaching the point
where an untrusted process can be allowed to use it, but it's not
there yet, and every other video driver on the major operating systems
is a joke.

You're not going to get very far if you're not willing to start from
process isolation, and then build real security on top of it. Anyway,
the world doesn't need another Java applet.

Re: sandboxing Rust?

Patrick Walton
I think this is too strongly worded. While I agree that naively running untrusted Rust code is not a good idea at all, I think that language-level security is not unachievable. It is absolutely an utmost priority to get to the point where the language is secure, and Rust treats memory safety issues with the same severity as security bugs. Even though we presently strongly advise against it, we intend to pretend that the point of Rust is to run untrusted code *as far as triaging issues and bugs is concerned*.

Emscripten/OdinMonkey and PNaCl have demonstrated that effectively hardening LLVM is possible for untrusted code. (Of course, there is a performance penalty for this.)

Finally, I disagree that processes are always the right solution here. If processes were as flexible as threads, there would be no need for threads! The trouble with process-level isolation is that it makes shared memory more difficult. For isolation with complex use of shared memory (mutexes and condition variables), you really want language-level safety.
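[Ed.: the kind of sharing Patrick means is routine in-process Rust: hand several threads a lock-guarded value and let the type system police access. A small illustration in present-day syntax; recreating this across a process boundary would mean mmap'd shared memory and hand-rolled synchronization.]

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // One counter shared by four threads. The Mutex is the only way in,
    // and the compiler rejects any attempt to bypass it.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // All 4000 increments land despite the contention.
    assert_eq!(*counter.lock().unwrap(), 4000);
    println!("final count: {}", counter.lock().unwrap());
}
```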

Patrick


Re: sandboxing Rust?

Daniel Micay
On Sun, Jan 19, 2014 at 3:34 AM, Patrick Walton <[hidden email]> wrote:
>
> Emscripten/OdinMonkey and PNaCl have demonstrated that effectively hardening
> LLVM is possible for untrusted code. (Of course, there is a performance
> penalty for this.)

PNaCl is primarily a low-level sandboxing technology, though. The
frontend languages/libraries, analysis/optimization passes, etc. do
not have to be correct. The scope of what has to be verified has been
drastically cut down to what is essentially a CPU architecture. I
think it's unlikely that the core implementation itself will have many
(if any) vulnerabilities. But once you throw in *all* of the Pepper
API, the sandboxed code is communicating with a huge codebase, and the
strong level of security is lost.

I don't think you can make a very strong claim that browser JavaScript
engines are secure sandboxes. There's an endless stream of *known*
security vulnerabilities for every major browser, and the scope is far
too large. You can't trust a technology like that to be secure,
because for every security researcher disclosing vulnerabilities,
there are many more being paid to keep them secret. The fact that
vulnerabilities are disclosed at a steady rate proves that browsers
are totally insecure.

> Finally, I disagree that processes are always the right solution here. If
> processes were as flexible as threads, there would be no need for threads!
> The trouble with isolation through processes is that isolation at the
> process level makes shared memory more difficult. For isolation with complex
> use of shared memory (mutexes and cvars), you really want language-level
> safety.

If there were a tiny subset of Rust that code could be compiled down
to, with a simpler backend (not LLVM), then I think you could talk
seriously about the language offering a secure sandbox. I don't think
that is attainable with a codebase as large as librustc/LLVM. A pretty
high number of issues in the Rust and LLVM trackers could be
considered security issues, and those are just the ones we know about.

Re: sandboxing Rust?

Daniel Micay
On Sun, Jan 19, 2014 at 4:17 AM, Daniel Micay <[hidden email]> wrote:
>
> If there were a tiny subset of Rust that code could be compiled down
> to, with a simpler backend (not LLVM), then I think you could talk
> seriously about the language offering a secure sandbox. I don't think
> that is attainable with a codebase as large as librustc/LLVM. A pretty
> high number of issues in the Rust and LLVM trackers could be
> considered security issues, and those are just the ones we know about.

Of course, the entire compiler still has to be free of vulnerabilities
itself. Even if it targets a backend assumed to be correct, the
attacker still has the entire surface area of libsyntax/librustc to
play with.

Re: sandboxing Rust?

Josh Haberman
In reply to this post by Patrick Walton
On Sun, Jan 19, 2014 at 12:34 AM, Patrick Walton <[hidden email]> wrote:
> I think this is too strongly worded. While I agree that naively running
> untrusted Rust code is not a good idea at all, I think that language level
> security is not unachievable. It is absolutely an utmost priority to get to
> the point where the language is secure, and Rust treats memory safety issues
> with the same severity as security bugs.

Cool, this is really what I was looking to know. For my own purposes
I'm not thinking so much of running entirely untrusted code, but more
like "pretty trusted" code: like the level of trust you have in a
framework/library that you download and use in your project; where you
didn't write the code yourself but you can read it first if you want
(and others probably have); where there is reputation on the line and
it would be tricky to hide an exploit in plain sight.

For this scenario you would care first and foremost that the code is
highly unlikely to escape inadvertently, and resistance to intentional
attack is just icing on the cake. From the above it sounds like the
goal is to take safety seriously, which would seem to make it entirely
appropriate for this purpose (eventually, once Rust is stable).

Thanks,
Josh