BSD Operating Systems

Fix the Bugs, Secure the System 346

LiquidPC writes: "OpenBSD's Louis Bertrand has put his MUSESS 2002 presentation online, entitled Fix the Bugs, Secure the System. It gives an overview of OpenBSD, then covers format string ugliness, buffer overflows, the wrong way to fix overflows, and numerous other topics."
  • Script Kitties (Score:4, Interesting)

    by Mattygfunk ( 517948 ) on Sunday February 24, 2002 @08:02PM (#3062664) Homepage
    It was a bit tedious flicking through all those slides but the final one [openbsd.org] did bring a smile to my face.
    • by Anonymous Coward
      The skeleton in front just left of the middle? The one with a beak and wings?

      That was a penguin. :-)
      • On the shirt [openbsd.org], which has a bigger version of this picture, you can see more than that.

        • The stripe on the top-right fish is made of Sun logos
        • The yellow top-left fish is covered with Windows logos
        Of course, only the blowfish will not get eaten by the cat (because of its spikes).

    • Great logo, I love that damn fish.

      Actually, after seeing the fish logo a little while ago (5-6 months), I thought geez ... those guys are either very arrogant SOBs or VERY sure of their software to make such claims (turns out they are both!! :) ). Since putting it on an old machine, which is now my home NAT device, I have been an OpenBSD fan. I've even moved a box into production.

      Cheers OBSD team. :) the logo does help IMHO.
  • Sure, the kiddies can still twiddle with system calls, but if they can't put _their_ code somewhere where _they_ can execute it, it raises the difficulty level of creating an exploit by an order of magnitude. Sure, false sense of security, blah blah blah, but really, shouldn't this (non-exec stack) be a standard feature of any OS that purports to be secure?
    • by Saint Nobody ( 21391 ) on Sunday February 24, 2002 @08:50PM (#3062884) Homepage Journal

      granted, a non-executable stack makes it significantly harder to exploit a buffer vulnerability, but it's not impossible. you can also put your shellcode in environment variables, in the heap, or various other places. if you wanted to follow your line of reasoning to completion, you'd have to have an isolated code segment, marked read-only, and everything else marked non-executable. of course, then we have the issue of how to handle run-time dynamic loading, and programs like vmware--pretty much anything that gets machine code from a source outside of itself and the libraries that are linked in at compile time.

      i do agree with the idea of a non-executable stack, though. it's just regarded far too often as a panacea for buffer overflows.

      • by YU Nicks NE Way ( 129084 ) on Sunday February 24, 2002 @09:15PM (#3062960)
        Standard buffer overflow exploits don't execute the stack. The most common form (so-called single instance exploits) alter the return point from a subroutine so that a particular command (also stored by the malicious code) gets executed. (E.g. in a Unix system, the attacker climbs around until he finds a call to exec, and branches to the exec with a call to /bin/sh in the right place on the stack.) The second most-common form consists of exploits that cause a function pointer to be replaced in a heap variable. Even if these exploits required the insertion of executable code -- and I don't know of any cases where they do -- a non-executable stack won't help against a heap attack.
    • A better idea would be to have two stacks - one for parameter data, and another for executable data. This way, an overflow in a variable couldn't overwrite executable code.
  • Damn it's tough to code in C these days, keeping track of all the stuff that one needs to to be reasonably secure.

    Not to mention the added overhead of making the system secure from semantic errors. Yeesh, it's a good thing I get paid a lot for my C work.

    But that's all okay, because (finally) technologies like Java, C# (okay, this one sucks, but whatever), etc. are arriving that will help out and provide a truly _secure_ development platform.

    I just hope they still pay me as much when this stuff finally gets easy, like it should be.
    • Yes, it's tough to code in C and still keep things secure, especially for inexperienced programmers. But people developing at the OS level need the speed and performance of C. We can't get that amount of speed with Java, C#, etc. There's always a trade-off.

      Interestingly, there's a C dialect called Cyclone [att.com] from AT&T which tries to give the best of both worlds. It doesn't allow careless code (the kind that becomes buffer overflows, etc.), but it doesn't sacrifice performance either.
      • Performance and C (Score:4, Insightful)

        by Tom7 ( 102298 ) on Sunday February 24, 2002 @10:55PM (#3063250) Homepage Journal

        I don't agree with your assessment that safe high-level languages necessarily perform badly. (What is the difference between speed and performance?) But, let's forget about that.

        What is "OS-level" about an ftp daemon? BIND? Mozilla? Gnutella? All sorts of network (and other) applications are written in C, even though there certainly isn't any need for performance or device-level bit manipulation. (At least, I would place security way above performance!)

        Cyclone is actually from Cornell, by the way. It's a good project for moving systemsy people away from C, but there are already mature programming languages that are not slow, and yet are secure by default. (Try SML or O'Caml, for instance.)
          Disclaimer: language zealots, please avoid this thread; it's about choosing the best tool for writing secure programs, not about religion.

          I've read many good things about O'Caml (nearly as fast as C!), so I tried to read an online book to learn the language, but I failed :-( (even though the book was in French and I'm French).

          Why? It's too different from a "normal" C-like language: I find functional-style programs unreadable.

          I know that with O'Caml you're not really restricted to functional-style programs, but the book was pushing this style quite strongly.

          I like using C, C++, Java (well, I don't like Java, but I have no problem using it), Perl, Ruby, Python.
          But I can't grasp O'Caml.

          So I'm not really sure that O'Caml (or the other functional languages) would have great success replacing the C-like languages; maybe Ada or Modula would be easier "replacement languages".

          • Learn it in Python. Really. Python 2.2 offers a whole host of lovely functional-programming features. Continuances, even. :)

            I prefer to write functional code in LISP or Scheme, but I won't sneer at someone who uses Python functionally. It might lessen the learning curve for you, let you experiment around with functional programming, and then use what you learn there in Scheme, LISP or Ocaml.
              I have no problem using a functional style in Python or Ruby because the "environment" is declarative.

              So using a functional style here and there is not difficult, and it is used even more in Ruby than in Python.

              But it's reading "pure functional programs" that I find very, very hard.

              Unfortunately, whereas O'Caml is nearly as performant as C (compiled O'Caml, of course), Ruby is much slower.

              I tend to prefer Ruby to Python, but the two are roughly equivalent.
            • I think you mean, "continuations". ;)

              I am glad to see where python is going; it seems to have a rather clean design, and they like to take good ideas out of the programming languages community. But it is still "just" a scripting language; its semantics preclude an efficient implementation (and make it harder to develop large programs).
    • Until you wish to open a file/device. Languages are never truly secure... programming methods are. People are people and will make mistakes that cause security problems.
      • Languages are never truly secure... programming methods are. People are people and will make mistakes that cause security problems.

        Why do you think programming methods are truly secure? People are people and will make mistakes that cause security problems. But in few languages besides C/C++ will you ever have a buffer overflow. Languages are not panaceas; they will not solve every problem, but they are one step toward producing more secure code.
  • by __past__ ( 542467 ) on Sunday February 24, 2002 @08:07PM (#3062688)
    The only thing I'd like to see from the OpenBSD guys would be a write-up of the gathered wisdom, in the form of a "Code-auditing HOWTO". Unless everybody starts using OpenBSD (not due this week, unfortunately), it would be nice if they would share their knowledge so that other platforms like, say, Linux, could benefit.

    But then I guess producing a high-quality operating system keeps them busy enough...

    • In the meantime, the best way I've found to identify possible poor security practices in my code is to examine known problems in the code of others.

      Which is my first argument for full disclosure of security issues. Not to mention security changelogs.
    • Unless everybody starts using OpenBSD (not due this week, unfortunately), it would be nice if they would share their knowledge so that other platforms like, say, Linux, could benefit.

      As if they'd pay attention. And before you mod that as flamebait, ask yourself why strlcpy() still isn't part of glibc..

      • by Arandir ( 19206 ) on Sunday February 24, 2002 @10:36PM (#3063200) Homepage Journal
        ask yourself why strlcpy() still isn't part of glibc

        Because if it isn't invented at GNU they won't use it?
      • by dvdeug ( 5033 ) <dvdeug@@@email...ro> on Monday February 25, 2002 @02:24AM (#3063783)
        As if they'd pay attention. And before you mod that as flamebait, ask yourself why strlcpy() still isn't part of glibc..

        There are a few huge, winding threads in libc-alpha <http://sources.redhat.com/ml/libc-alpha> on this. One answer is:

        These words make sense. The problem with strlcat and strlcpy is that they
        assume that it's okay to arbitrarily discard data for the sake of preventing a
        buffer overflow. The buffer overflow may be prevented, but because data may
        have been discarded, the program is still incorrect. This is roughly analogous
        to clamping floating point overflow to DBL_MAX and merrily continuing
        in the calculation. ;)


        Agree or disagree, the developers of glibc don't find strlcpy to be an appropriate function based on its merits. Trying to claim otherwise is just trying to stir up trouble.

        • by ^BR ( 37824 )

          Since strncpy() does exactly the same thing, except that it doesn't bother to always NUL-terminate the resulting string.

          Data discarding can be detected by checking return values; you can't do much about people not checking the result of their call. The question is: which API is less troubling, strncpy() or strlcpy()?

        • The problem with strlcat and strlcpy is that they assume that it's okay to arbitrarily discard data for the sake of preventing a
          buffer overflow.


          A function should always throw out data that doesn't match its parameters. If a function expects an int and the user passes a double, it gets changed back to an int. The user's data gets lost, but that's his fault for using the program incorrectly. Every C compiler known to man behaves this way. Why should strings be any different?

          • > A function should always throw out data that doesn't match its parameters.

            No, it should signal an exceptional condition. Checking the return value of strlcpy or strncpy for "actual bytes written" means checking it against strlen ... scanning the string just to get the length. If I could simply get a return value that indicated success or failure, that would be infinitely preferable in my code. Not that C has strings anyway; it just has some array hacks to deal with moving around what amounts to void* chunks, and not even efficiently at that.
    • Try the Secure Programming for Linux and Unix HOWTO [dwheeler.com]


      It explains the basics of secure programming and common problems with a variety of programming languages including buffer overflow and many more tricky problems.

  • by vlad_petric ( 94134 ) on Sunday February 24, 2002 @08:26PM (#3062779) Homepage
    ... is definitely neither security nor bugs - it's popularity/acceptance. To support my claim: there is no OpenBSD entry in the top requested websites [netcraft.com]

    What's the point of a rock-solid operating system if very few are actually using it (and of course, that happens because of lacking features)? For a server security is always the second issue - the first being the service provided.

    (I'm definitely exaggerating here, so flame me as you like)

    The Raven.

    • You may not find that millions of web servers are running on OpenBSD, but if there were some way to keep track of how many of them are protected by firewalls/routers running OpenBSD, the numbers would probably be more impressive.
      I, for one, find that the "secure by default" policy is incredibly convenient for a drop-in firewall solution (and I've done this a few times for various companies).
    • by zulux ( 112259 ) on Sunday February 24, 2002 @08:38PM (#3062824) Homepage Journal
      What's the point of a rock-solid operating system if very few are actually using it

      OpenBSD will never show up on my networks - but every packet that gets to my FreeBSD web servers goes through an OpenBSD firewall. I imagine that a lot of packets are touched by OpenBSD - and we'll never know it.

  • Secure programming (Score:4, Informative)

    by Kiwi ( 5214 ) on Sunday February 24, 2002 @08:28PM (#3062783) Homepage Journal
    One of the most common security problems is buffer overflows; the way I worked around this was to write a special string library where the strings had metadata, including the maximum length a given string could have.

    One of the problems with secure programming is the inertia in the computer industry; most of the operating systems in widespread use today (the *nix clones and DOS derivatives, these days) were developed in a time when security did not matter; *nix has a crude root-or-not security model and MS-DOS has no conception of security at all.

    Personally, I think the solution is a system with a real security model, such as EROS [eros-os.org]. The "audit the code so that it is perfect code without bugs" approach to security does not always work [monkey.org], even with OpenBSD.

    - Sam
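    For concreteness, a metadata-carrying string of the sort described above might look like the following sketch. All names and types here are invented for illustration; this is not the poster's actual library.

```c
#include <stdlib.h>
#include <string.h>

/* Invented sketch of a length-tracking string: the buffer carries its
 * own capacity and current length, and the single append routine does
 * the bounds check, so callers cannot overflow it by accident. */
typedef struct {
    size_t cap;   /* total bytes allocated for data */
    size_t len;   /* bytes in use, excluding the NUL */
    char  *data;
} mstring;

mstring *ms_new(size_t cap)
{
    mstring *s = malloc(sizeof *s);
    if (s == NULL)
        return NULL;
    s->data = malloc(cap);
    if (s->data == NULL) {
        free(s);
        return NULL;
    }
    s->cap = cap;
    s->len = 0;
    s->data[0] = '\0';
    return s;
}

/* Returns 0 on success, -1 if appending would exceed the capacity
 * recorded in the metadata -- the overflow is refused, not performed. */
int ms_append(mstring *s, const char *src)
{
    size_t n = strlen(src);
    if (s->len + n + 1 > s->cap)
        return -1;
    memcpy(s->data + s->len, src, n + 1);
    s->len += n;
    return 0;
}
```

    The point is that the bounds check lives in exactly one place, which is also the reply's counter-argument: that one place is now the code that must be audited to perfection.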

      One of the most common security problems is buffer overflows; the way I worked around this was to write a special string library where the strings had metadata, including the maximum length a given string could have.

      OK, great. But how can we be assured that your string library doesn't have security problems? Somewhere, someplace, memory is getting allocated and bytes are getting written, string copies are being performed, and buffers await overrunning. Auditing code so it is perfect and without bugs does work for security, it just has to take place in the libraries rather than the applications.
      • by rgmoore ( 133276 )
        Auditing code so it is perfect and without bugs does work for security, it just has to take place in the libraries rather than the applications.

        But it doesn't really work. It's better than nothing, but if there's one thing that years of security bugs should have taught us, it's that there are always new classes of undiscovered bugs out there. You can eliminate every known bug, but that doesn't guarantee that there are no clever exploits that you haven't figured out but that somebody else has (or will) find out how to use. What you really need is a level of security that's orthogonal to code level security. That can be something like capabilities, mandatory access controls, or even just finer grained control over what the code is allowed to do than Unix's all or nothing model.

        Right now, if I want my computer's CD burning software to be able to set itself at high priority to avoid buffer underruns, I have to run it SUID root. That's insane; it means that a single programming error in what reasonably should be a user-accessible program could give somebody complete access to the system. That isn't security, it's a nightmare. We need a system where I can assign that program only the right to reset its own priority, and not complete run of the system. Yes, it's better if the code is audited and potential bugs are eliminated, but a system in which a single bug can completely compromise the whole system is badly designed.

        • That's a problem with administration skills. See, Unix has the concept of "groups" -- you can grant some privileges to some users but not others. Now, the Linux kernel doesn't really offer very fine-grained device control, but that's an implementation problem, not a design problem.
    • by glenmark ( 446320 )
      ...or take the approach taken by OpenVMS from the beginning: any time a system call needs a string, that string is passed by descriptor. Of course, when the programmer is sloppy and uses null-terminated strings for his own calls, a buffer overflow in OpenVMS would only crash the program. Overflowing data would be discarded rather than executed. It boggles my mind that this flaw in Unix still has not been corrected after all these years.
  • by SteelX ( 32194 ) on Sunday February 24, 2002 @08:43PM (#3062853)
    While we're on this topic, this Secure Programming HOWTO [dwheeler.com] for Linux and UNIX might be of interest. It's a pretty comprehensive book. And best of all, it's free! :-)
  • Presentation... (Score:5, Insightful)

    by sean23007 ( 143364 ) on Sunday February 24, 2002 @08:54PM (#3062899) Homepage Journal
    If this had been converted from presentation-style to an actual webpage, it would have been deemed a big waste of time. Where is all the information? There isn't even anything new here, I already knew everything there, and I've only been using OpenBSD for a couple weeks.

    The only thing there was a long list of titles with no information, old or new.
  • by Jucius Maximus ( 229128 ) on Sunday February 24, 2002 @08:57PM (#3062909) Journal
    Why is it that when MSFT does something like stopping to fix bugs and secure systems [slashdot.org], we make fun of them, but if it's BSD we look at it as something we can learn from?
    • Re:Fix the bugs? (Score:3, Insightful)

      by Malcontent ( 40834 )
      Because we are able to learn from the OpenBSD team. Their goal is to help everybody build more secure systems. MS security practice takes place behind closed doors and cannot help anybody else.

      In the end we will see if MS is able to actually execute their goal. OpenBSD already has.

    • "Why is it that when MSFT does something like stopping to fix bugs and secure systems, we make fun of them, but if it's BSD we look at it as something we can learn from? "

      M$ doesn't generally try to fix bugs as much as they try to fix it so that there is the perception that they try to fix bugs. In the end, they are perfectly content to sell highly exploitable systems so long as the ignorant masses will buy them. Witness XP ... should they pull it off the market? absolutely. Will they? Why recall Fords with Firestone tires when people are still buying them and they have agreed that if the tires explode and they die there is no liability on the part of those pocketing the pretty polly?
  • by LoonXTall ( 169249 ) <loonxtall@hotmail.com> on Sunday February 24, 2002 @09:38PM (#3063034) Homepage
    I'm a CS major, and we just got some sample code from the professor to help us on our first project. The very first thing it does in main is have a buffer overflow.

    #define SZ 100
    char buf[SZ];
    cout << "Enter courses filename: ";
    cin >> buf; // BAM!!


    This is C++! We have the string datatype for this! There's absolutely no excuse for this--especially in code that will be referenced as "good" code by everyone else in the class.

    So anyway, the point of this rant is that security will remain horrible until we start teaching people to write securely in the first place.
    • It's a class. OK, a class is a sort of type, but it's not an intrinsic type.

      That said, yeah, he should use cin.getline().

      Hey, at least he used #define to set the array size. Wait until you get hit with a 100,000 line program to modify where the author didn't use #define...

    • by Kidder ( 33176 ) on Sunday February 24, 2002 @11:22PM (#3063340) Homepage
      A professor's code is not necessarily the best code in the world. I had a professor who used gets() in the example code he gave us and I had to explain the difference between fork() and vfork() to him (well, not much of a difference anymore...) I had another professor whose code had a MAJOR memory leak in it. I politely emailed the professor about it and he replied to the entire class with the memorable phrase: memory leaks are not important anymore.
    • So anyway, the point of this rant is that security will remain horrible until we start teaching people to write securely in the first place.

      That isn't the prof's responsibility. He (she?) is a computer scientist, not a software engineer, and certainly not a security wacko. The relationship between computer science and software engineering is kind of like the relationship between physics and mechanical engineering -- the scientists create the knowledge and the engineers put it to use. You can't expect a physicist to design a perfect bridge, any more than you can expect a computer scientist to write secure code. It isn't what we do, and we really don't care about it. Computer science is really more about mathematics than programming; if you want to learn good design practices, take a software engineering course.

      • The problem is that most universities out there still only have a CS program, not an SE program. I've been ranting on this topic for at least a dozen years or so.

        The head of the CS department of my old college is a friend of my father-in-law, and they don't see the problem - which is why they keep producing people with CS degrees who can't work in the real world.
      • The distinction between CS and SE is not (unfortunately) as clear-cut as that. The University of Toronto, for instance, has both Computer Science and Computer Engineering (which, of course, includes software), and the two are certainly not as distinct as physics and mechanical engineering.

        Software is a strange beast. There is nothing else so abstract and yet so directly practical. It defies analogy with other fields.
      • That isn't the prof's responsibility.

        I'd like to see you say that to Bruce Schneier.

        Security sucks IRL. Handing people insecure code that they assume is correct is not the way to fix it. If it is not the responsibility of the person writing the code to make it secure (at least against coding errors like string formatting and buffer overflows), whose is it?

      • But this is RUDIMENTARY sh-t we're talking about here. If I can't trust a professor to understand why buffer overruns and memory leaks are undesirable, how am I supposed to trust anything that the professor says?
    • How big can it be?
      What happens when it *is* bigger than expected?
      32767 + 1 yields -32768
      99 + 1 gives 00 (or :0 or ...)
      What happens if the string is bigger than the string datatype can hold?
      At least the prof put the critical assumption up front.
  • by Tom7 ( 102298 ) on Sunday February 24, 2002 @10:42PM (#3063214) Homepage Journal

    I can't believe there is not one mention of using a language other than C. Is it the systems community? Is it because of BSD's history?

    I don't know why this idea fails to even come up. Network servers are bandwidth-limited, not cpu limited, and writing them in a safe high level language is not only easier, but makes buffer overflows impossible. Being easier to write also of course allows more time for optimization and for other security fixes. (For those that need really high-performance for their gigabit links, maybe a C version and very careful maintenance is possible. For home users, this prospect is ridiculous.)

    C seems almost *designed* to allow for buffer overflow exploits. If we want secure programs, we should be starting from more secure foundations!

    For more detail, check my previous rant, "C lang remains inappropriate for network daemons": http://slashdot.org/comments.pl?sid=24271&cid=2629013 [slashdot.org]

    • A language is only as good as its compiler. I remember reading an article (at the register maybe?) about a microsoft security product that had buffer overflows not because of the original code, but from the code that the compiler generated. C, not being a very high level language, is easier to write compilers for. It is easy to audit and verify. It is what most system programmers cut their teeth on. All of those reasons (and many more) make it an ideal OS language. Yes, it has its problems, but at least you know that a buffer overflow is your own damn fault if you write it. And with a little knowledge and forethought they can be easily avoided.
      • I agree with you in theory, but in reality we have seen very few compiler flaws and very many application flaws. Writing compilers for high-level languages isn't all that hard, anyway.

        I disagree that buffer overflows can be easily avoided. If they are so easy to avoid, why do we continue to see them? Practically every popular network software written by anyone has fallen to a buffer overflow at one time. I also disagree that C code is easy to audit and verify. Wu_FTPD is over 24,000 lines long, and I can't imagine ever trying to think through the security of such a large system on pure willpower. Safe languages give you the benefit of computer checking, and this frees up your mind to think about more important things (such as the security problems that compilers can't check!)
      • Somehow I knew someone would say something like this. Which do you think would be easier: making sure that a compiler generates safe code, or making sure that no buffer overflows, memory leaks, or any other such things are in ANY programs or libraries. Why not fix things once rather than having to fix them everywhere?
    • Network servers are bandwidth-limited, not cpu limited

      Not to detract from your main point, which is a good one and well made, but that particular statement is pretty dubious. Some network servers are bandwidth-bound, some are CPU-bound, some are memory-bound or disk-bound, some are crappy-API-bound, some are bound by complex synchronization/serialization requirements. Most are affected by more than one of these limitations, and by the tradeoffs that must be made between them.

      None of this refutes your argument that C is not the best language for servers. Its lack of type safety, range/bounds checks, proper overflow handling (which requires exceptions), garbage collection, and so on are all well known. Java is a much better language in these regards while still remaining fairly familiar, and even for completely CPU-bound programs there's a compelling argument for HotSpot-style JIT as an alternative to traditional compilation. If only Java supported true MI instead of the inadequate "interface" hack/substitute (and I do understand how the requirements for code mobility made that a reasonable choice at the time). Other, more "exotic", languages such as those in the Scheme or ML categories might appeal to purists, but their chances of achieving widespread adoption will remain almost nil until the "impedance mismatch" with declaratively-oriented system programming interfaces is lessened.

      • > Not to detract from your main point, which is a
        > good one and well made, but that particular
        > statement is pretty dubious.

        OK, you're right. I guess what I really meant to say is, "For the home user with a powerful computer and relatively small bandwidth, CPU performance of network applications is nowhere near as important as security is."

        I still think SML or O'Caml could take off with hackers, but I'm still trying to figure out a way to convince them to try something new. It's really not that hard, and the interfaces to system libraries are actually rather straightforward.
  • by redelm ( 54142 ) on Sunday February 24, 2002 @10:43PM (#3063215) Homepage
    Granted, good programming practice [fixed-length buffers] is the best solution. But while waiting for code clean-up, wouldn't it be a fairly simple fix to wrap the offending variable-length libc calls with fixed-length ones? The problem is choosing the length.

    For x86 with a standard stack-frame setup, there is an answer: the length _MUST_ be less than (EBP - *ptr) if the stack isn't to be trashed. Note that other local data may well get trashed, but at least the program doesn't lose control.

    The wrapper could drop leading or trailing chars, but should signal an error, in the unlikely event the code was written with error trapping. Of course, this wouldn't work if the code was compiled with -fomit-frame-pointer [or equivalent], but there is a price for security.

  • by Anonymous Coward
    ...is that C was intended to be a systems language used by experts. Instead it's being used to create networked systems, and very often by very non-expert coders.

    If you want to make sure people don't make a particular mistake, make it impossible for them to do so. That means you either 1) fix C to eliminate all buffer overflow issues (impossible, IMO), 2) enforce proper coding technique, possibly through a special string library and/or macros (very difficult on a project as large as an OS), or 3) ditch C completely (virtually impossible given the size of the Linux code base).

  • by Otis_INF ( 130595 ) on Monday February 25, 2002 @04:06AM (#3063996) Homepage
    The reason for all this buffer-overflow crap is that in C, and thus also in C++, people tend to use arrays or blocks of allocated memory to represent strings. What's needed is a string datatype IN the language, like int and char. Then the compiler can do as the CLR does: allocate the strings, even local-scope ones, on the heap. This way, no buffer overflows can happen, since the type is in fact a black box, so an overflow will cause some kind of error; plus, the overflow can't be used to modify the stack frame and thus the return address, since the string variable isn't allocated on the stack.

    In C++, there is the string class in the std lib, but it's not native to the language (almost native, OK, but not totally, as in C#).

    C is a language where respect for the borders of a block of memory is in the hands of the developer. Clearly, that's too old-fashioned today, since language elements can prevent mistakes C allows developers to make.
  • Not to flame, but

    "Four years without a remote hole in the default install!"

    is nothing compared to MS-DOS's twenty-year safety track record. That, and there are thousands of "potential" buffer overflows in realistically safe code like this:

    int SomeFunc ()
    {
    char foo[] = "Hello";
    OtherFunc(foo);
    return 0;
    }

    void OtherFunc(char * foo)
    {
    /* this is only ever called from SomeFunc(),
    * which passes a string literal. This is, of
    * course, completely undocumented. You never
    * read this comment.
    */

    char * bar = malloc(strlen(foo)+1);
    strcpy(bar, foo);
    free(bar);
    }

    Yes, OpenBSD is a very nice OS, but no, it isn't a magic bullet.

"If it ain't broke, don't fix it." - Bert Lantz

Working...