7.11.14

Running with pointers II

(This is the second in an occasional series of explorations of some of the stranger areas of C++ syntax.)

Let's assume that we're all familiar with the RAII idiom (which is basically a fancy way to say "using destructors to manage resources correctly"). So let's consider the following listing, and ask ourselves: what output might it produce, and "Is it safe?"

#include <stdio.h> // for printf()
#include <memory> // for std::unique_ptr<>

struct Lock {
    void lock() {
        printf("Lock::lock()\n");
        data = std::unique_ptr<int>(new int(23));
    }

    void unlock() {
        printf("Lock::unlock()\n");
        data.reset(); // release() would leak the int; reset() frees it
    }

protected:
    // valid during lock: using unique_ptr for safety
    std::unique_ptr<int> data;
};

// RAII for safety
struct AutoLock {
    AutoLock(Lock &l): l(l) { l.lock(); }
    ~AutoLock() { l.unlock(); }

    Lock &l;
};

struct SafeCode: Lock {
    void safe() {
        AutoLock(*this);
        // we can access the data now...
        printf("My data is safe here!\n");
        printf("Locked data = %i\n", *data);
    }
};

int main() {
    SafeCode s;
    s.safe();
    return 0;
}

We have a type called Lock that has lock() and unlock() methods. These methods guard a resource which is only valid while the lock is held. (For our purposes, the 'resource' we're managing is the number 23, and it is accessible directly through a member variable—neither of these is a particularly defensible design decision, but both can be ignored.) We're even using a fancy C++11 unique_ptr to manage our resource to make sure we don't get burned by raw pointer errors (which might be ironic if it weren't a terrible analogy). AutoLock wraps the lock's API in its constructor and destructor, which should ensure that Lock::data is valid for the lifetime of an AutoLock object.

The type SafeCode inherits from Lock and provides a method called safe() which demonstrates the usage of AutoLock, which ought to be safe, right? It's right there in the name of the function.
So we'd expect to see this:

Lock::lock()
My data is safe here!
Locked data = 23
Lock::unlock()

SPOILERS: No, it's not safe. What I actually see, compiling with gcc -std=c++11 on Ubuntu 14.04 is this:

Lock::lock()
Lock::unlock()
My data is safe here!
Segmentation fault (core dumped)

Ouch. So what went wrong? The problem is actually with the definition of the AutoLock object at the start of safe(). We forgot to give it a name, so it is immediately destroyed after its creation, and more importantly, before we try to access the data. If we give it a name (so AutoLock lock(*this); or even AutoLock _(*this); would suffice) it will survive until the end of the scope it's contained in (i.e. the end of the function safe()).

We'll need to go digging in the standard to find out more. Section 12.2 (Temporary objects) contains verbiage which would indicate that the anonymous AutoLock instance is "a temporary whose lifetime is not extended by being bound to a reference". Because it has no name, it cannot possibly be referenced again after its introduction, so the compiler is justified in placing the call to its destructor immediately after the object is constructed. (We will leave aside the question of why we are allowed to create an anonymous instance like this, and if this is ever legitimate—please post a comment if you have a situation where defining an anonymous temporary variable like this is a valid and useful technique.)

The use of unique_ptr<> is pretty much a red herring; I just threw it in to head off the critiquing that might've resulted from seeing int *data; in the definition of Lock. (And, I suppose, smart pointers are now standard, so we should use them given the option. Unless, I also suppose, you really are concerned about performance, and you have a profile trace demonstrating that the use of a smart pointer is your performance bottleneck. But this is a lot of parenthetical supposition.) One thing I will say, though, is that at least unique_ptr causes the broken code to crash—with a raw pointer, the program happily reported that the value of data was zero, and continued to run to completion. Although given the nature of the bug, we could assume that literally anything could happen.

"Forget it Jake, it's undefined behaviour..."

30.10.14

Trigonometry in awk

I've been doing a lot of work with OpenGL recently, attempting to get some glBegin()/glEnd()-era 'rendering' code into some kind of shape where I can port it from desktop OpenGL to mobile and web platforms running OpenGL ES or WebGL. This basically meant throwing the existing code into a well, burning it, learning how to OpenGL properly and starting again, but that's all going quite well—I've even started writing shaders! 1

This post, however, is not about that. Having got to a point where I have a basic 2D GL3.0/GLES2.0 framework in place, I wanted to actually draw something. Triangles are almost too easy to draw, and squares aren't much harder (you can either use glDrawArrays(GL_TRIANGLE_STRIP) or glDrawElements(GL_TRIANGLES)).

So, with mastery of 3 and 4 vertex shapes, I wanted to move on to the next challenge, which is obviously... a pentagon. But how to figure out the co-ordinates? The math behind regular polygons is pretty straightforward, so we just need to take sines and cosines of some angles. I thought that jumping into C++ for this was a bit over the top, and I wasn't in the mood to mess around with any 'batteries-included' interpreters. I just want something quick, light and iterative—agile, if you will. I wonder whether awk has any numeric capabilities...

Spoiler: It does!

#!/usr/bin/awk -f
BEGIN {
    PI = 3.141592654

    if("" == (SIDES = ARGV[1])) {
        SIDES = 5
    }
    if("" == (COMMENT = ARGV[2])) {
        COMMENT = "#"
    }

    print COMMENT " " SIDES "-sided regular polygon..."

    print COMMENT " vertices"
    for(n = 1; n <= SIDES; n++) {
        A = 2 * PI * n / SIDES
        printf "%.2f, %.2f,\n", cos(A), sin(A)
    }
    print COMMENT " texcoords"
    for(n = 1; n <= SIDES; n++) {
        A = 2 * PI * n / SIDES
        printf "%.2f, %.2f,\n", (.5 + cos(A)/2), (.5 + sin(A)/2)
    }
}

One of the niceties of awk is that there's no operator for string concatenation: you just glom string variables and constants together next to each other and it 'just works', which makes print statements a lot less noisy than in most other languages. Another nice thing is that printf is available, and works exactly as you'd expect. Other than that, we only really need sin(), cos() and the ability to loop, and we're finished.

When run, this spits out data for vertex coordinates (ranging -1.0 to 1.0) and texture coordinates (ranging 0.0 to 1.0) for each point around the edge of a regular polygon (defaulting to a pentagon if no parameters are given).

$ ./poly.awk
# 5-sided regular polygon...
# vertices
0.31, 0.95,
-0.81, 0.59,
-0.81, -0.59,
0.31, -0.95,
1.00, 0.00,
# texcoords
0.65, 0.98,
0.10, 0.79,
0.10, 0.21,
0.65, 0.02,
1.00, 0.50,

It can generate coordinates for any number of sides, and there's even an optional parameter to change the comment syntax, so you can just copy and paste the output into the vertex array literal of your language of choice.

$ ./poly.awk 3 //
// 3-sided regular polygon...
// vertices
-0.50, 0.87,
-0.50, -0.87,
1.00, 0.00,
// texcoords
0.25, 0.93,
0.25, 0.07,
1.00, 0.50,
$ ./poly.awk 6 --
-- 6-sided regular polygon...
-- vertices
0.50, 0.87,
-0.50, 0.87,
-1.00, -0.00,
-0.50, -0.87,
0.50, -0.87,
1.00, 0.00,
-- texcoords
0.75, 0.93,
0.25, 0.93,
0.00, 0.50,
0.25, 0.07,
0.75, 0.07,
1.00, 0.50,

awk's execution model is geared towards reading files line-by-line, extracting patterns and processing them, so this isn't really playing to its strengths. But nevertheless, treating it like a high-level, loosely typed version of C, I was able to get from idea to implementation to refinement in about 10 minutes. (And then I was able to render a coloured, shaded OpenGL pentagon and it made me happy. I am easily pleased.) I think awk is a useful tool to get to know, seeing as it's almost certainly already installed on your machine2.

The Android NDK goes as far as to use awk to build a full XML parser (of sorts), which is entertaining, if a little bit bonkers.

Here's a terrible-looking graphic of the Emscripten-powered pentagon!



1 I should probably write about cross-platform shader development at some point, it's all manner of fun!
2 Not you, Windows user! You'll have to make do with PowerShell.

10.9.14

Setting up a fossil server

When I'm working on personal projects, I've usually used a pen and paper to keep track of design issues, bugs to be fixed and so on, but I've recently found myself wanting an actual issue tracking database. My requirements are actually pretty straightforward:
  • Must be accessible from a browser
  • No PHP
  • Easy to set up
  • Use SQLite as a database backend if possible (see previous point)
I did some searching and reading around, and I thought about Apache Bloodhound, but I was scared off by the installation instructions. I also briefly looked at Trac which Bloodhound is apparently a fork of, but again, a baffling and intimidating installation procedure left me cold.
Next up was Bugzilla: I spent about 20 minutes trying to set it up, installed a lot of Perl modules, and got an incomprehensible (and, even worse, un-google-able) timezone error, so I didn't even get as far as trying to set up an instance of the Apache web server, which I wasn't expecting to enjoy anyway. At least with this one, I tried!
During my earlier research, I had read about fossil: if you type issue tracker SQLite into a search engine of your choice, fossil will be on the first page of results, because it and SQLite are written by the same person. I'd disregarded fossil as its primary purpose seems to be as a DVCS that happens to have bug tracking (and a wiki!) as additional features on the side. However, it's very, very easy to set up a fossil server from a standing start. So easy, in fact, that I managed to get it up and running in under ten minutes, on a server running Debian 6:
After sshing into the box, I installed the distribution version of fossil with apt-get, and created a new user called fossil (because I am terrible at naming things, it turns out).
$ su
# apt-get install fossil
# adduser fossil
Follow the adduser prompts with password and other information as desired here…
Now, as the fossil user, create a new database in the home directory (called bugs.db—names; terrible).
# su fossil
$ fossil new ~/bugs.db
Make a note of the username and password for the site's admin account in the output from this command (the username is probably fossil if you specified that to adduser), as you won't be able to do anything useful on the server without admin access. At this point, we could start the server running, but we want it to start up when the machine starts up, so we need to go back to root and edit rc.local:
$ exit
# vim /etc/rc.local
Add this line to the end:
su fossil -c 'fossil server ~/bugs.db'
That's literally all you need to do. Reboot the machine and, boom, fossil should be listening on port 8080 when it comes back up, and you can log in using the username and password noted earlier. If you want to be all fancy and route HTTP traffic from port 80 over to 8080, you can add another line to rc.local, just before you kick off the fossil server:
iptables -t nat -A PREROUTING -i venet0 -p tcp --dport 80 -j REDIRECT --to-port 8080
If you're anything like me, you'll probably want to spend the next hour or so tweaking the site's CSS and HTML templates and making it your own.
Fossil is a triumph of minimalism—it is literally one single executable file and one database file. It contains a DVCS, wiki, issue tracking and source control browser with a web interface. I'm primarily only interested in the bug tracking for the time being, as at the moment I still want to keep my source code in SVN. But, now that I have a fossil repository available to me, I might want to import my history and give myself the option to work offline, git-style. Another possibility is that I can use the fossil repository to hold releases of my projects, and I can accept and version control patches and bugfixes for each release. Having a (bare-bones) wiki up and running is also nice.
The moral of this story is that from an installation perspective, fossil definitely beats out the competition here. It's not as configurable or as extensible as other solutions, but that doesn't really matter to me if I can't get the other solutions to run in the first place!

25.8.13

Type tagging and SFINAE in C++

While it may sound like an onomatopoeia for somebody sneezing, SFINAE is a C++ idiom, standing for 'Substitution Failure Is Not An Error'. The idea is that when the compiler substitutes template arguments into a set of candidate templates, a substitution which would produce invalid code does not cause a compilation error; the failing candidate is simply dropped from consideration. As long as there is one valid candidate, that candidate will be used. In other words, the fact that some substitutions may fail is not enough to cause a compilation error. A quick example to demonstrate this—let's assume we have declared the following template which expects to operate on types which contain an embedded type called SomeType:
template<typename T>
struct Example {
    typename T::SomeType t;
};

struct Ok { typedef int SomeType; };

Example<Ok> ok; // perfectly fine
It should be fairly uncontroversial to point out that this is not going to work with native types, such as int:
Example<int> i; // not so fine
But, if we were to provide an int-compatible specialization, then the presence of the primary template isn't going to interfere with the use of the specialized version:
template<>
struct Example<int> {
    int t;
};
Example<int> i; // fine now, default Example template is no longer considered.
This selection process can be used to choose programmatically between different template instantiations, based on the presence or absence of an embedded type (i.e. a tag type) in a type declaration. The syntax used to define these kinds of template mechanisms can often be somewhat opaque, so I devised a mechanism which conveniently wraps the type detection mechanism into a single macro, called TYPE_CHECK(). An example usage would be something like this:
TYPE_CHECK(Test1Check, T, TypeToCheck,
    static const bool VALUE = true, // Test1Check body when T::TypeToCheck exists
    static const bool VALUE = false); // Test1Check body default
This defines a template type called Test1Check<T>, containing a boolean constant VALUE which is true for any T where T::TypeToCheck exists, or false if it doesn't, so, in the following example, we would see output of "0, 1" from printf():
struct TestingF {};
struct TestingT { typedef void TypeToCheck; };

printf("%u, %u\n",  // prints "0, 1"
    Test1Check<TestingF>::VALUE,
    Test1Check<TestingT>::VALUE);
TYPE_CHECK() takes 5 arguments: the first and second are the name of the check type (Test1Check), and the type parameter (usually but not necessarily T). The macro will expand into a template struct definition (template<typename T>struct Test1Check { /*...*/ }; in this case). The third parameter is the name of the type we want to test for (i.e. the presence or absence of T::TypeToCheck), and the fourth and fifth parameters represent the body of this struct (the /*...*/ part) if the test type is present, or the default in the case it's not present.
We could rewrite our initial Example given above as follows; it will now work for any type without an embedded T::SomeType, and not just int:
TYPE_CHECK(Example, T, SomeType,
    typename T::SomeType t,
    T t);
You can also use TYPE_CHECK() to embed functions into the check type, so that your program can operate differently depending on if the test type is present or not. You can use this to implement some fairly primitive compile-time reflection mechanisms.
One additional refinement worth mentioning is that if you have a compiler which supports C99-style variadic macros, it's possible to parenthesize the fourth and fifth arguments, which is occasionally useful if they need to contain commas—an example of this is in the test code provided below.
There's one additional macro called TYPE_CHECK_FRIEND(). It takes the name of a check defined by TYPE_CHECK() and this can be placed inside the body of a type if you want to give the check access to the internals of a type. Again, there's an example of this in the test code.
The TYPE_CHECK() implementation lives in a single header file, nominally called "type_check.h", which can be copied from here. You should be able to just paste it to a local file and start using it. It contains the two macros outlined above, and a few implementation details (anything in the namespace tc_ or starting with a tc_ prefix), which you can ignore. If you're using a compiler which doesn't support variadic macros, you should #define TYPE_CHECK_NO_VA_ARGS before #including it.
A simple 'test suite' can be copied from here, which shows a few different ways that this kind of mechanism can be used. As far as I'm concerned, this code is public domain, so feel free to do whatever you'd like with it.

Addendum 8.3.14:

I just noticed that my source code (which was hosted on hastebin.com) is no longer available there, so I've pushed the files to Dropbox instead, where hopefully they will remain accessible for the foreseeable future. It should make it easier for me to publish updates as well, which is for the best: running the code through ideone reveals an issue with the TYPE_CHECK_FRIEND() macro in GCC 4.8.1, although 4.3.2 seems happy enough with it.

17.6.13

On casting the result of malloc()

It may be that a good way to drive traffic to your (relatively) new blog would be to find a contentious but ultimately minor technical argument and take sides, and so without further ado:
uint8_t *p = (uint8_t*)malloc(n);
There's a large amount of debate around casting the result of malloc(), and we're going to examine whether or not it's necessary. (The short answer is that it isn't, but we will explore why in more detail.) There are three main scenarios in which a cast of malloc() could be used, either in C or in C++, or in what we'll refer to as "C/C++" (a misguided attempt to write in both languages at once).

In C

This is fairly straightforward: the C standard (as of C89, at least) specifies that a void pointer can be implicitly converted to any other object pointer type, so the cast is unnecessary; and since unnecessary casts are bad, we shouldn't add any. We should write the following:
uint8_t *p = malloc(n);
We can go slightly further than this if we want to allocate an instance of a specific type, rather than a buffer of arbitrary size, and we should phrase the call to malloc() thus:
Type *p = malloc(sizeof(*p));
In this case, the compiler can calculate the size we want to allocate for the object from the dereferenced pointer type. Some people would have you phrase that as:
Type *p = (Type*)malloc(sizeof(Type));
Which manages to be ugly, repetitive and fragile, mentioning the name of Type three times, where once would suffice. We should not listen to these people.
Another argument against casting in C is that if you've neglected to #include <stdlib.h>, then you would get a warning about a cast from int to a pointer type. This would be due to the compiler assuming that malloc() returns an int, as it hasn't seen a prototype. This is technically true, but I would think that if you've neglected to include system header files, you'd be lucky if the worst outcome was getting a single warning (i.e. you will most likely have larger problems). And it seems that recent versions of GCC will give you a warning ("incompatible implicit declaration of built-in function ‘malloc’") if <stdlib.h> is missing, whether you cast the result of malloc() or not.

In C++

The argument in C++ is also fairly straightforward—while implicit conversions from void pointers are verboten, there is really no need to use malloc() at all in C++, where new and new[] exist and are much more typesafe:
uint8_t *p = new uint8_t[n];
And in the case of allocating an instance of a type we could write something like this (which will also allow you to pass arguments to the Type constructor):
Type *p = new Type(a, b, c);
In some limited circumstances, you may want to allocate memory for an object in an unusual way, but you can still use a placement new on a void pointer in this kind of situation:
void *p = memalign(64, sizeof(Type));
Type *t = new(p) Type(a, b, c); // no casting required

In "C/C++"

One remaining argument which might be raised is that you'd like to write code which can be compiled with both a C compiler and a C++ compiler (simultaneously, perhaps?). In this case, people will try to convince you that you'd need to use malloc() for C compatibility, and you'll need to cast its result for C++ compatibility, so in this specific case, you really have no choice but to write:
uint8_t *p = (uint8_t*)malloc(n);
And these people are wrong, for two reasons. Firstly, if I genuinely need code which compiles to both languages, I'm going to use the preprocessor so I can work with the union of the idioms of both languages, rather than the intersection:
#ifdef __cplusplus
#define MY_MALLOC(type_, size_) static_cast<type_ *>(malloc(size_)) // ...or even "new type_[size_]"
#else//__cplusplus
#define MY_MALLOC(type_, size_) malloc(size_)
#endif//__cplusplus

//later...
uint8_t *p = MY_MALLOC(uint8_t, n);
But (secondly) there are very few reasons to do this kind of thing anyway - if you have some C code, just compile it with a C compiler and link it against your C++ application, possibly with some judicious use of extern "C" here and there.
So, in summary, there are no situations where it is necessary to cast the result of malloc()—it is at best redundant, and at worst actively detrimental to your code's quality.

27.5.13

Colons in make targets

I learned something interesting about GNU make recently. It's possible to write rules for targets which contain colons (:). This doesn't work very well for filenames, even though Linux/UNIX filesystems could support it in theory—from the evidence on stackoverflow, it seems to break make's handling of dependencies internally.
But there is one potential situation where the colon could be of use, in pattern rules. Consider the following makefile1:
SOME_VAR:=some_value
OTHER_VAR=other_value

all: ; @echo "Just a vanilla rule to show the 'cut & paste'-friendly rule syntax."

show\:%: ; @echo $(@:show:%=%)="$($(@:show:%=%))"
This creates a target pattern show:%, where % operates as a wildcard. Notice that we need to escape the colon in the target's definition, as an unescaped colon would be interpreted as part of the rule's target: deps syntax. However, when it comes to making substitution references, a colon can be used without needing to be escaped, despite being part of the syntax. (In fact, the substitution will actually fail if the colon is escaped in this case—this is probably due to this being a syntactical edge-case.)
The formulation $(@:show:%=%) in the rule's recipe takes the name of the target (e.g. show:something) and strips off the initial show:, leaving the rest of the target name as a result (e.g. something). We can then use this value as we'd use any data in make—in this case, we're using it to show the value of a makefile variable, which could be useful when debugging makefiles, as the examples show:
$ make show:SOME_VAR
SOME_VAR=some_value

$ make show:OTHER_VAR
OTHER_VAR=other_value
So we can see that this show: pattern rule handles both flavours of make variable (i.e. = and :=). It can even be used to inspect some of make's built-in special variables:
$ make show:MAKEFILE_LIST
MAKEFILE_LIST= makefile

$ make show:.FEATURES
.FEATURES=target-specific order-only second-expansion else-if archives jobserver check-symlink

$ make show:.VARIABLES
.VARIABLES=<D ?F DESKTOP_SESSION CWEAVE ?D @D XAUTHORITY GDMSESSION CURDIR SHELL RM CO _ [...]

$ make show:DESKTOP_SESSION
DESKTOP_SESSION=ubuntu
So now we have a fairly natural-looking syntax for building make targets which take a single variable 'parameter'. I can see other uses for this: a rule to write a version number or string into a header file, for example.
There's a minor refactoring which could be made: if the repetition of $(@:show:%=%) in the rule is unacceptably offensive, we can hoist the substitution logic out into its own variable (which needs to be of the recursively expanded (=) flavour), although we then have to use $(patsubst) to make the substitution work:
_showtarget=$(patsubst show:%,%,$@)
show\:%: ; @echo $(_showtarget)="$($(_showtarget))"
One final note—when I say make above, I specifically mean GNU make v3.81.
$ make -v
GNU Make 3.81
This trick might work in older versions of GNU make (and it could possibly break in future versions, but hopefully with enough publicity, it won't). I doubt it will work with any other variant of make. (But I believe the first rule of Makefiles, so other versions of make are irrelevant to me.)


1 Note that I'm using the inline rule syntax to work around the 'tabs' issue—using a semicolon after a rule's target, so the recipe does not need to be indented. This should allow you to copy the code out of the blog and have it work as intended when it's pasted into a makefile.

21.5.13

Running with pointers I

(This is the first in an occasional series of explorations of some of the stranger areas of C++ syntax.)

Consider the following code. What does it output? Will it run without crashing?
#include <stdio.h>

struct TypeA {
    TypeA() { printf("::TypeA() {}\n"); }
    ~TypeA() { printf("::~TypeA() {}\n"); }
};

struct TypeB {
    TypeB() { printf("::TypeB() {}\n"); }
    ~TypeB() { printf("::~TypeB() {}\n"); }
};

int main() {
    TypeA *a = new TypeA;
    TypeB *b = new TypeB;

    printf(" %p %p\n", (void *)a, (void *)b); // %p expects void*

    delete a, b;
    delete (a, b); // I really want these deleted.

    return 0;
}
You may be relieved to learn that the program does actually release its resources correctly. But the two delete lines should probably be rewritten and the utterly misleading comment removed. Both a and b are deleted once each, although the appearance of the comma operator in the delete expressions introduces some confusion. In the first delete, delete a is evaluated first, then the expression as a whole evaluates to b. On the next line, the expression (a, b) is evaluated, and then the result of that (b) is deleted.
The output you'll see is something like the following (pointer values may settle in transit; dramatization, do not attempt, etc):
$ ./a.out
::TypeA() {}
::TypeB() {}
 0x10a2010 0x10a2030
::~TypeA() {}
::~TypeB() {}
Slightly alarming is that the code compiles without a peep using GCC's default settings (on Ubuntu 12.04.2):
$ g++ main.cpp
[ compiler says nothing... ]
Although we do get some (slightly cryptic) warnings when compiling with -Wall:
$ g++ main.cpp -Wall
main.cpp: In function ‘int main()’:
main.cpp:19:16: warning: right operand of comma operator has no effect [-Wunused-value]
main.cpp:20:16: warning: left operand of comma operator has no effect [-Wunused-value]
So, uh, -Wunused-value is your friend, I guess... (If you compile this on a different compiler/OS, let me know what results you get.)