fruneau's comments | Hacker News

Extensions are how new features come to an established language... The C11 standard mostly standardised stuff that was already supported as extensions by most compilers. Most of the time, extensions are created because a feature is missing from the language (alignment requirements, for example), because the compiler could do a better job at optimising the code with better hints (noreturn, restrict, strict-aliasing, ...), or because we could simply make the job of the developer a bit simpler/safer (nested functions, _Generic, blocks, ...).
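
To make this concrete, alignment went exactly through that path. A quick sketch (the struct names are mine, assuming a GCC- or Clang-compatible compiler): the extension spelling and the C11 spelling express the same thing.

    #include <stdalign.h>   /* C11: alignas/alignof */
    #include <stdio.h>

    /* Pre-C11: alignment had to go through a compiler extension. */
    struct vec_ext {
        float v[4];
    } __attribute__((aligned(16)));     /* GCC/Clang extension */

    /* C11 standardised the same capability as _Alignas/alignas. */
    struct vec_std {
        alignas(16) float v[4];
    };

    int main(void)
    {
        printf("ext: %zu, std: %zu\n",
               alignof(struct vec_ext), alignof(struct vec_std));
        return 0;
    }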

IMHO, the main benefit lost by using bleeding-edge extensions is portability, which may or may not be an issue.


Three main reasons for this.

First, we originally chose the C language because it is much, much simpler. One can master the language without too much pain, and you end up having much more control over what you are actually doing (there's little chance a line of code does not do what you read from it... while in C++ you may have hidden behavior behind even the simplest operations such as + or *).

Secondly, and this is probably a matter of taste, C++11's lambdas are just awfully designed. Their syntax overloads tokens such as [] with a totally different meaning. As in many situations, the C++ design committee tends to choose the most complicated possible design, without taking readability into account (maybe conciseness is the main goal of their syntax choices?). On the other hand, the blocks syntax makes it very clear that you are dealing with a function-like object, with very similar syntax. The choice was made to have a clean and readable syntax.
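
To give a rough idea of the difference, here is a minimal sketch (assuming Clang with -fblocks; the names are mine, and the C++11 counterpart only appears as a comment):

    #include <stdio.h>

    /* Build with: clang -fblocks blocks_demo.c
     * (on non-Apple platforms you also need -lBlocksRuntime). */
    int main(void)
    {
        int base = 10;

        /* A block: the ^ marker makes it obvious this is a function-like
         * object, and the declaration reads like a function pointer. */
        int (^add)(int) = ^(int x) { return base + x; };

        /* The rough C++11 equivalent overloads [] for the capture list:
         *     auto add = [base](int x) { return base + x; };
         */
        printf("%d\n", add(32));    /* prints 42 */
        return 0;
    }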

Third point, C++11 just didn't exist in 2009. There were drafts, but support from compilers was just nascent. RHEL in 2009 was at version 6 (a very young release), which ships with GCC 4.4 (and GCC 4.7 as an experimental toolchain). The most common RHEL was version 5, with GCC 4.1. RHEL has officially supported C++11 since RHEL 7, which ships with GCC 4.8 and was released in June 2014.


Some already tried: http://anzwix.com/a/FreeBSD/GccAddSupportForApplesBlockExten... http://gcc.1065356.n5.nabble.com/RFC-Apple-Blocks-extension-...

I'm not aware of any attempt that successfully reached upstream. GCC could have simply imported Apple's patches if there were no licensing issues.


Thanks. That said, I'm not sure the amount of effort is that huge: getting a working rewriter was a matter of days, which makes it far less expensive than rewriting a codebase of several hundred thousand lines of C in a new language.


Languages such as Go, Rust or Swift aim to fill that gap, but in 2010 they were either nascent or simply nonexistent.


Go doesn't embed at all, so right there that's out.

Rust is still very unstable, and I wouldn't recommend it for production use.

I'm not sure why you would use Swift unless you're targeting iOS; it seems like a terrible idea to base your code on a proprietary language without ridiculous, .NET levels of support. I also don't believe Swift is easily embedded, with its huge runtime.


I'm not sure that Swift belongs on that list. It was announced back in June and there's still no compiler for any non-Apple platform. It's safe to say, at this point, that portability isn't one of Swift's goals.


D? Why does everybody hate/ignore D?


Interesting question. I'm not intimately familiar with D, but it does seem like it's designed as a saner alternative to C++. How many C++ alternatives are out there? From my memory, D seemed like one of the first major attempts to get away from C++. Maybe they didn't go far enough to meet people's expectations. Or maybe they were too far ahead of their time.

Anyway, when you said that, the first thing that popped into my mind was Ewan McGregor going all "you were the chosen one", and actually I think that kind of describes my feeling about D. At first it sounded very exciting, but then it seemed like there really wasn't a payoff and I ended up feeling kind of bleh about it. I wonder how many other people feel similarly. Maybe D's problem is kind of an emotional one?


You are overthinking it.

D had a "version two" new compiler issue, and a "let's rewrite basic libraries" issue that split the user base.

They are also bad at marketing (and no match for Google's visibility -- they don't even have a mascot, IIRC).

And they still lack some basic things to this day, including a way to go from download to compiling your code in a minute.


Go: not that performant, unless you're writing network code.

Rust: not even 1.0 stable.

Swift: not cross platform.


There's a moment when you realise you should avoid allocations because you discover they are expensive. Then you also realise at some point that you cannot avoid some allocations (e.g. you need to perform some non-blocking networking), and because of the first point, you get a bit frustrated: you're forced to do something that is expensive.

That's exactly when you realise you can get rid of that frustration if you introduce some complex caching, object pools or more complex structures. But the more complex it gets, the more frustrated you become (again). Finally, you end up writing something simple and efficient: a memory allocator that matches the exact allocation pattern of your use case... and you want that allocator to be as efficient as possible.
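
To give an idea of what that last step can look like, here is a minimal, hypothetical sketch (not the allocator discussed in the article): a free list that recycles fixed-size I/O buffers so the hot path never goes back to malloc.

    #include <stdlib.h>

    #define BUF_SIZE 4096

    /* A trivial pool of fixed-size buffers: acquiring and releasing a
     * buffer is a pointer swap instead of a full malloc/free. */
    typedef union buf {
        union buf *next;          /* valid only while the buffer is free */
        char       data[BUF_SIZE];
    } buf_t;

    static buf_t *free_list;

    static void *buf_acquire(void)
    {
        if (free_list) {
            buf_t *b = free_list;
            free_list = b->next;
            return b;
        }
        return malloc(sizeof(buf_t));   /* slow path: grow the pool */
    }

    static void buf_release(void *ptr)
    {
        buf_t *b = ptr;
        b->next = free_list;            /* push back onto the free list */
        free_list = b;
    }

    int main(void)
    {
        void *a = buf_acquire();        /* malloc on the first call */
        buf_release(a);                 /* goes back onto the free list */
        void *b = buf_acquire();        /* reuses the same block: b == a */
        free(b);                        /* fine: the block was malloc'ed */
        return 0;
    }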


Moreover, I think some answers don't do what the subject says they should. For example, "Detect Multiple Of Two" detects powers of two, not multiples.


Agree on needing comments.

My C++ is a bit rusty, but I think the code is in fact checking if the number is divisible by 2 (i.e. n % 2 == 0).

I think it's using the bitwise and operator (single &) to AND each bit in n and (n-1) and then checking if the least significant bit is 1 or 0. The code would need to shift (<< or >>) to check if the number were a power of 2.

I thought there was another bitwise NOT operator, not the !, but I think (and this is the part I'm hazy on after getting home) that ! applied to an int is intended to flip each bit. Here's why:

------------

n = 6 = 110

n-1 = 5 = 101

n & (n-1) = 110 & 101 = 100

!(110) = 001 => true

------------

n = 5 = 101

n-1 = 100

n & (n-1) = 101 & 100 = 101

!(101) = 010 => false, but only if the least significant bit is used for logic decisions... this seems non-portable for some reason, just like using ! as a bitwise NOT. It may actually be that ! only looks at the least significant bit, but I can't find C++ docs that say one way or the other.

Whatever the code actually does, this is exactly the kind of example to use as a poster child for adding just a few comments.


The GP is correct, this is bit-twiddling code to test for a single bit being set in the int, and therefore being a power of 2, not simply a multiple of 2.

Broadly speaking, and ignoring edge cases, if N is a power of 2 then it has a single bit set. Subtracting one unsets that bit, so the AND of N and (N-1) is zero. On the other hand, if N is not a power of two then subtracting 1 leaves the top bit still set, so the AND is non-zero. Therefore taking the boolean NOT gives the right answer.

    00010000  Power of 2
    00001111  N-1
    --------
    00000000  -> 0

    00010110  Non power of 2
    00010101  N-1
    --------
    00010100  -> Non 0
Thus power of 2 is NOT( N & (N-1) ). Your examples are wrong because "!" is not bit-wise NOT, but boolean NOT. !0 = True, !N = False when N is non-zero.

It's actually wrong on the edge case of N=0, which passes the test but is not a power of 2.
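
In C, the usual way to make the test hold for every input is to rule that case out explicitly. A small sketch (the function name is mine):

    #include <stdbool.h>
    #include <stdio.h>

    /* !(n & (n - 1)) alone also reports 0 as a power of two,
     * so 0 has to be excluded explicitly. */
    static bool is_power_of_2(unsigned n)
    {
        return n != 0 && (n & (n - 1)) == 0;
    }

    int main(void)
    {
        for (unsigned n = 0; n <= 8; n++)
            printf("%u -> %d\n", n, is_power_of_2(n));
        /* prints 1 for 1, 2, 4 and 8, and 0 for everything else */
        return 0;
    }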


Thanks, you're right. I don't know why I would have ever thought "!" was bitwise... only on a Saturday night.

It just goes to show that sleep is very important for interviews!


I do agree. The ASan part of the article starts with the following sentence: "The tradeoff however is that ASan won’t detect errors such as uses of uninitialized variables or leaks that memcheck can detect, but on the other hand it can detect more errors related to static or stack memory."

Only the runtime part is considered to be less feature-complete than Valgrind (because the runtime part of ASan only keeps information about allocated memory, not about memory initialization). It is the compile-time instrumentation that makes it possible to detect stack and global out-of-bounds accesses.
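
A classic illustration of what the instrumentation buys you (the file name and compile line are only an example):

    /* stack_oob.c -- build with: cc -g -fsanitize=address stack_oob.c */
    #include <stdio.h>

    int main(void)
    {
        int buf[8];

        for (int i = 0; i <= 8; i++)    /* off-by-one: writes buf[8] */
            buf[i] = i;
        printf("%d\n", buf[0]);
        return 0;
    }

ASan flags the write to buf[8] as a stack-buffer-overflow because the compiler placed redzones around buf, while memcheck, which only observes heap allocations at runtime, typically stays silent.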


I see, right.


There are two main issues with alloca: first, you cannot deallocate or reallocate the memory, you just append more data to your frame. As a consequence, it is not suitable for dynamic allocations, while the t_stack is.

The second drawback is that alloca allocates on the stack; as a consequence, it is limited by the size of the stack (a few megabytes on recent Linux distributions, and the actual amount of remaining stack depends on the call stack, since each frame consumes some stack and may have put huge buffers/allocas on it already). The t_stack has no hard limit.

Additionally, by being totally separated from the stack, the t_stack provides a flexible alternative to the stack: you have finer-grained control on allocation/deallocation patterns.
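
To give a rough idea (a hypothetical toy, not the actual t_stack API): a heap-backed allocator with explicit frame marks lets you release memory at arbitrary points, which alloca cannot do.

    #include <stdlib.h>
    #include <string.h>

    /* A toy heap-backed stack allocator: unlike alloca, a frame can be
     * popped at any point, not only when the C function returns. */
    typedef struct {
        char  *mem;
        size_t used;
        size_t size;
    } stack_alloc_t;

    static stack_alloc_t st;

    static void st_init(size_t size)
    {
        st.mem  = malloc(size);
        st.size = size;
        st.used = 0;
    }

    static size_t st_push_frame(void)       { return st.used; }
    static void   st_pop_frame(size_t mark) { st.used = mark; }

    static void *st_alloc(size_t n)
    {
        if (st.used + n > st.size)
            return NULL;                    /* a real allocator would grow */
        void *p = st.mem + st.used;
        st.used += n;
        return p;
    }

    int main(void)
    {
        st_init(1 << 20);                   /* 1 MiB backing block */

        size_t frame = st_push_frame();
        char *tmp = st_alloc(64 << 10);     /* 64 KiB: risky with alloca */
        memset(tmp, 0, 64 << 10);
        st_pop_frame(frame);                /* releases everything since the mark */

        free(st.mem);
        return 0;
    }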

As said in another comment, the drawbacks of alloca are explained in the previous article of the series.


alloca has its drawbacks. See the previous article in the series: https://techtalk.intersec.com/2013/08/memory-part-3-managing...

