Before anything else, I would just like to say that GSoC is awesome!  Sure, I still would’ve joined the free software movement without this program, but you have to admit that it is such a fantastic annual event, for lots of different reasons, that I, and many like me, couldn’t have jumped on the bandwagon this early if not for GSoC.

You see, I’m torn between academic pressures and conservative people who think that free software is either cult hype or something that doesn’t make business sense.  It’d be nice to just say, “Yes, I am a free software developer and my salary is higher than yours!  Hah!”  But really, I can’t say that because:

  1. I care for some of them,
  2. I don’t for most of them, and
  3. I am not a free software developer (yet).

So yeah, kudos to Google and the free software organizations that made this possible.  Especially to those people, Google employees or otherwise, who spent (and are spending) their time answering our stupid and (sometimes) paranoid questions on the mailing lists and IRC.  I need not mention your names; you know who you are.

GIMP’s Knight in Shining Armor: GEGL

It has always been known that GIMP is not feature-complete for *all* workflows and use cases.  (In fact, one could say the same is true of most applications.  People just looove to complain, nevertheless. :p)  Some users get frustrated when they request certain features only to discover that most of them are either planned (but not yet implemented) or will never be implemented.

One such awaited feature is support for higher bit-depth images.  This and many other limitations have magnified GIMP’s somewhat notorious reputation of not being suitable for certain workflows.  For certain other workflows, however, GIMP does pretty well.  I should know, because I have used GIMP as a web-graphics artist before.

In an effort to address many of these limitations, the GIMP developers started hacking on a new graphics core that would support the desired features from the get-go, thereby dodging ugly and misplaced refactorings that would otherwise make GIMP unstable.  To help with the latter goal, the aforementioned graphics core, GEGL, was developed as a separate library.

Now, sometime around late 2006, GEGL was deemed stable enough to be actually usable.  Starting with GIMP 2.6, the developers began integrating GEGL into GIMP.

As I said in the previous post, my GSoC project has something to do with GIMP and its new graphics core, GEGL.  To expound on that: my work won’t really touch any part of GIMP’s sources.  Rather, I will be adding features to GEGL.  However, since GIMP will rely heavily on GEGL for most of its graphics tasks, GIMP users will also experience the benefits of my project.

General-Purpose Computing on Graphics Processing Units

Otherwise known as GPGPU (don’t ask me what happened to the ‘c’ in computing, I don’t know :).  GPGPU is an ongoing trend of utilizing the GPU (Graphics Processing Unit, found on modern video cards) for general computing.  This is an interesting notion because none of this utilization can be automated (well, not yet at least).  That is, because the GPU is so specialized, you can’t just write code that runs on the CPU and expect the same code to run on the GPU as well.  Also, not all applications can be rewritten to use the GPU; only a subset of programming problems fit the GPGPU model.

These are the reasons why applications that want to take advantage of GPGPU techniques must be manually rewritten to explicitly use the GPU.  Communication with the GPU is done through specialized libraries (or APIs) that expose the GPU’s features.  One such well-known library is OpenGL.

OpenGL was created to allow games to talk directly to the GPU and render mind-blowing, very distracting (for me, at least) images and animations on screen.  Because of GPGPU, OpenGL is now also used to perform heavy mathematical calculations.

You might ask: why can’t we just develop and use specialized libraries for GPGPU?  Well, in fact, such libraries already exist.  But they’re either proprietary (e.g. NVIDIA’s CUDA, ATI’s CTM) or too young (e.g. OpenCL).  Because of this, we have chosen OpenGL, for its wealth of references, GPGPU-related or otherwise, on the internet, and for the stability of existing OpenGL implementations.

Stop beating around the bush!

All right, you don’t need to be so harsh…  :)

My task involves modifying GEGL to use GPGPU.  That is, modifying GEGL so that pixel operations like blur, brighten, etc. can run on the GPU in addition to their existing CPU implementations.  To accomplish this, I have to:

  1. modify the existing GEGL buffering mechanism to somehow make use of OpenGL textures, and
  2. implement some pixel operations to expect OpenGL textures instead of pixels from main memory.

It’s that simple, really.  But so far, I’m not quite sure it will stay that easy when we get to the nitty-gritty parts.  We’ll discuss the issues in later posts.

The advantages of using the GPU are easy to see when you learn that typical video cards are heavily parallel: an NVIDIA GeForce 8600GT card has about 32 cores and an ATI HD4890 card has about 800 cores!  This is in stark contrast to consumer-available CPUs, which max out at 8 (logical) processors (e.g. the Intel Core i7).  Imagine how much of an improvement in performance this brings!