GSoC and doing pixel operations on the GPU

June 1, 2009

Before anything else, I would just like to say that GSoC is awesome!  Sure, I still would’ve joined the free software movement without this program, but you have to admit that it is such a fantastic annual event, for lots of different reasons, that I, and many like me, couldn’t have jumped on the bandwagon this early without it.

You see, I’m torn between academic pressures and conservative people who think that free software is either cult hype or something that doesn’t make business sense.  It would be nice to just say, “Yes, I am a free software developer and my salary is higher than yours!  Hah!”  Now, really, I can’t say that because:

  1. I care for some of them,
  2. I don’t for most of them, and
  3. I am not a free software developer (yet).

So yeah, kudos to Google and the free software organizations who made this possible.  Especially to those people, Google employees or otherwise, who spent/are spending their time answering our stupid and (sometimes) paranoid questions on the mailing lists and IRC.  I need not mention your names, you know who you are.

GIMP’s Knight in Shining Armor: GEGL

It has always been known that GIMP is not feature-complete for *all* workflows and use cases.  (In fact, one could say that the same is true for most applications.  People just looove to complain, nevertheless. :p)  Some users get frustrated when they request certain features only to discover that most of them are either planned (but not implemented yet) or will never be implemented.

One such awaited feature is support for higher bit-depth images.  This and many other limitations have magnified GIMP’s somewhat notorious reputation for not being suitable for certain workflows.  For certain other workflows, however, GIMP does pretty well.  I should know because I have used GIMP as a web-graphics artist before.

In an effort to address many of these limitations, the GIMP developers started hacking on a new graphics core that would support the desired features from the get-go, thereby dodging ugly and misplaced refactorings that would otherwise make GIMP unstable.  To help with the latter goal, the aforementioned graphics core, GEGL, was developed as a separate library.

Now, sometime around late 2006, GEGL was deemed stable enough to be actually usable.  Starting with GIMP 2.6, the developers began integrating GEGL into GIMP.

As I said in the previous post, my GSoC project has something to do with GIMP and its new graphics core, GEGL.  To expound on that, my work won’t really touch any part of GIMP’s sources.  Rather, I will be adding features to GEGL.  However, since GIMP will rely heavily on GEGL to do most of its graphics tasks, GIMP users will also experience the benefits of my project.

General-Purpose Computing on Graphics Processing Units

Otherwise known as GPGPU (don’t ask me what happened to the ‘c’ in computing, I don’t know :).  GPGPU is an ongoing trend of utilizing the GPU (Graphics Processing Unit, found on modern video cards) for general computing.  This is an interesting notion because none of this utilization can be automated (well, not yet at least).  That is, because the GPU is so specialized, you can’t just write code that runs on the CPU and expect the same code to run on the GPU as well.  Also, not all applications can be rewritten to use the GPU; only a subset of programming problems fit the GPGPU model.

This is why applications that want to take advantage of GPGPU techniques must be manually rewritten to explicitly use the GPU.  Communication with the GPU is done through specialized libraries (or APIs) that expose the GPU’s features.  One well-known example is OpenGL.

OpenGL was created to allow games to talk directly to the GPU and render mind-blowing, very distracting (for me at least) images and animations on screen.  With GPGPU, OpenGL is now also used to perform heavy mathematical computations.
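To make that concrete, here is a minimal sketch of the classic OpenGL GPGPU pattern: upload an array to the GPU as a texture, run a fragment shader over a viewport-sized quad so it executes once per output pixel, and read the results back to main memory.  This is not GEGL code; GLUT and GLEW are used only to obtain a GL context, error checking is omitted, and the suggested build line is an assumption about your setup.

    /* Classic OpenGL GPGPU sketch: texture in, fragment shader, read-back.
     * Possible build line (an assumption): gcc gpgpu.c -o gpgpu -lglut -lGLEW -lGL */
    #include <GL/glew.h>
    #include <GL/glut.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define W 256
    #define H 256

    /* The per-pixel "heavy math": here it just brightens every pixel.
     * gl_FragCoord is divided by 256.0 to recover [0,1] texture
     * coordinates; this must match W and H above. */
    static const char *frag_src =
      "uniform sampler2D input_tex;\n"
      "void main ()\n"
      "{\n"
      "  vec2 coord = gl_FragCoord.xy / 256.0;\n"
      "  gl_FragColor = texture2D (input_tex, coord) * 1.5;\n"
      "}\n";

    static void
    display (void)
    {
      static float input[W * H * 4], output[W * H * 4];
      GLuint       tex, shader, program;
      int          i;

      for (i = 0; i < W * H * 4; i++)
        input[i] = 0.5f;

      /* 1. upload the input data to the GPU as a texture */
      glGenTextures (1, &tex);
      glBindTexture (GL_TEXTURE_2D, tex);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
                    GL_RGBA, GL_FLOAT, input);

      /* 2. compile and bind the fragment shader */
      shader = glCreateShader (GL_FRAGMENT_SHADER);
      glShaderSource (shader, 1, &frag_src, NULL);
      glCompileShader (shader);
      program = glCreateProgram ();
      glAttachShader (program, shader);
      glLinkProgram (program);
      glUseProgram (program);

      /* 3. draw a quad covering the viewport; the shader runs once per
       *    pixel, in parallel on the GPU */
      glViewport (0, 0, W, H);
      glBegin (GL_QUADS);
      glVertex2f (-1, -1);
      glVertex2f ( 1, -1);
      glVertex2f ( 1,  1);
      glVertex2f (-1,  1);
      glEnd ();

      /* 4. read the computed pixels back into main memory */
      glReadPixels (0, 0, W, H, GL_RGBA, GL_FLOAT, output);
      printf ("first output value: %f\n", output[0]);   /* roughly 0.75 */

      exit (0);
    }

    int
    main (int argc, char **argv)
    {
      glutInit (&argc, argv);
      glutInitWindowSize (W, H);
      glutCreateWindow ("gpgpu sketch");
      glewInit ();
      glutDisplayFunc (display);
      glutMainLoop ();
      return 0;
    }

In a real image-processing pipeline the final read-back would ideally be avoided by keeping intermediate results in textures on the GPU, which is exactly what the buffering work described later in this post is about.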

You might ask: why can’t we just develop and use specialized libraries for GPGPU?  Well, in fact, such libraries already exist.  But they’re either proprietary (e.g. NVIDIA’s CUDA, ATI’s CTM) or too young (e.g. OpenCL).  Because of this, we have chosen OpenGL, for the wealth of references, GPGPU-related or otherwise, available on the internet and for the stability of existing OpenGL implementations.

Stop beating around the bush!

All right, you don’t need to be so harsh…  :)

My task involves modifying GEGL to use GPGPU techniques.  That is, GEGL should support doing pixel operations like blur, brighten, etc. on the GPU in addition to their existing CPU implementations.  To accomplish this, I have to:

  1. modify the existing GEGL buffering mechanism to somehow make use of OpenGL textures, and
  2. implement some pixel operations to expect OpenGL textures instead of pixels from main memory (a rough sketch of what this could look like follows this list).
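Here is a hedged sketch of the second item.  The function name run_point_op is hypothetical and this is not the actual GEGL code: it shows a point operation that consumes an OpenGL texture and renders its result into another texture through a framebuffer object, so the pixel data never has to leave video memory.  It assumes a current GL context, GLEW, and a compiled, linked “brighten” program like the one sketched earlier (whose hard-coded 256.0 would, of course, have to match the width and height used here).

    /* Hypothetical point operation: texture in, texture out, via an FBO
     * (EXT_framebuffer_object).  NOT actual GEGL code. */
    #include <GL/glew.h>

    /* Run `program' over `input_tex' and return a new texture holding the
     * result; `width' and `height' describe the region (in GEGL's case,
     * presumably a tile) being processed. */
    GLuint
    run_point_op (GLuint program, GLuint input_tex, int width, int height)
    {
      GLuint output_tex, fbo;

      /* output texture, same size as the input */
      glGenTextures (1, &output_tex);
      glBindTexture (GL_TEXTURE_2D, output_tex);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                    GL_RGBA, GL_FLOAT, NULL);

      /* attach it to a framebuffer object so we render into the texture
       * instead of the window */
      glGenFramebuffersEXT (1, &fbo);
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo);
      glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_TEXTURE_2D, output_tex, 0);

      /* run the shader over every pixel by drawing a quad that covers
       * the whole viewport */
      glViewport (0, 0, width, height);
      glUseProgram (program);
      glBindTexture (GL_TEXTURE_2D, input_tex);
      glBegin (GL_QUADS);
      glVertex2f (-1, -1);
      glVertex2f ( 1, -1);
      glVertex2f ( 1,  1);
      glVertex2f (-1,  1);
      glEnd ();

      /* detach the FBO; the result stays on the GPU in output_tex */
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);
      glDeleteFramebuffersEXT (1, &fbo);

      return output_tex;
    }

Chaining operations would then amount to feeding one operation’s output texture into the next one’s input, which is where the modified buffering mechanism from item 1 comes in.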

It’s that simple, really.  But so far, I’m not quite sure it will be that easy once we get to the nitty-gritty parts.  We’ll discuss the issues in later posts.

The advantages of using the GPU are easy to see when you learn that typical video cards are heavily parallel: an NVIDIA GeForce 8600 GT card has about 32 cores and an ATI HD 4890 card has about 800 cores!  This is in stark contrast to consumer-available CPUs, which max out at 8 (logical) processors (e.g. the Intel Core i7).  Imagine how much improvement in performance this brings!

9 Responses to “GSoC and doing pixel operations on the GPU”

  1. gladys said

    nosebleed.

  2. dAVe said

Good luck on this. Just kidding, it’s just me. Hehehe

  3. t.....en said

Bro, this is very interesting! I would also like to learn how to use PC hardware when doing pixel operations. Can you teach me?

  4. I found your blog on google and read a few of your other posts. I just added you to my Google News Reader. Keep up the good work. Look forward to reading more from you in the future.

  5. gladys said

    hahahaha. :)

  6. danipga said

    Hi,
first of all, congratulations on your work and collaboration in GSoC.

I’m trying to parallelize GEGL code in some different ways, mainly in a thread-based way, but I’m also looking to parallelize it using GPGPU. So, I think your work is very interesting. Is the code you’re modifying for GSoC publicly available? I was thinking about trying to use CUDA (the only GPGPU technology I know), and maybe I could give a little help with your work.

    • Daerd said

      Hello,

      Thanks for your interest.

      I’m not really familiar with CUDA and so I’m quite unsure what you mean by “thread-based.” I totally intend to make my implementation thread-safe though. That is, different threads should be able to access the pipeline in a safe way. The code which currently lives in GNOME’s git repositories[1] isn’t thread-safe. But we’ll get there.

I’m sure you know this, but we’ve ruled out using CUDA for GEGL; we’re using OpenGL instead. If you still want to help, feel free to leave your email address and we’ll take it from there.

Update: Consider subscribing to GEGL’s mailing list[2] and/or lurking in #gegl on irc.gimp.org. We’d be happy to have you on board!

      Cheers!

      [1] http://git.gnome.org/cgit/gegl/. My modifications live in the gsoc2009-gpu branch.
      [2] http://lists.xcf.berkeley.edu/lists/gegl-developer/
