Showing posts with label Coding. Show all posts

Life

Just an update to let anyone reading this blog know that the project from my last post was put on the back burner for a bit while I worked out other things in life. More specifically, I've been looking into grabbing a job. I recently had an on-site interview with Facebook which went relatively well, but I did happen to stumble a little on the technical interviews. In a couple of days I'll be hearing back about that. Looks like a great place to work!

Anyways, back to the project. It's getting pretty close to release, so expect a blog post in the near future pointing to the repository. I'm mostly improving the documentation and working out the workflow for release. There are a couple of design issues I have with the project currently, but I may push things to GitHub before I take care of those.

Keep checking back for updates!

-- UPDATE -- 
Looks like I didn't get the job. Unfortunately, my lack of experience and my stumbling played against me.

New Projects

It's been quite some time since my last post (this seems to be a regular occurrence, doesn't it?). Well, fear not, I still exist. Unfortunately the score editor project had to be put on the back burner for now, as I've picked up a real job (only a short-term contract), which leads into my discussion in this post.

Working on a real project has really helped me learn many things, but especially good design techniques. In particular, my project will be used as a plugin for an existing piece of open source software (Phon, a tool for phonological analysis). To not see the past few months' worth of work go to waste, I've been given permission to take this project on as my own (it'll show up on my GitHub account eventually), so I'm really designing this to be something that's easily usable and extensible by others. In its simplest form, it's a framework for designing complex operations from simpler ones (think Quartz Composer, but more general).

Since I'm designing something that will be used by others, I really need to think twice about every decision I make. Some discussion points on my experience so far:
  • I want my API to be final (in the Java sense of the word) so that people can depend on it (e.g., backwards compatibility), but I still want the API to be extensible. I've implemented a simple extension mechanism that permits this freedom. Anyone can simply query an API structure for a certain extension, and if it exists they get an instance of that extension to work with. Otherwise, they get a whole bunch of null.
  • Modular programming can be a wonderful thing for an API. When I started, I just had one massive project. When I chose Maven for building, I decided to move to a modular design. What I ended up with is a set of modules (e.g., API, GUI, XML IO) that are tightly coupled. Right now I'm working on decoupling these modules as much as possible and, soon enough, modules will depend only on the API module, which is the way it should be.
  • I've never really designed anything modularly before, but what I've learned is that dependency injection (DI) is a beautiful thing. In particular, I make extensive use of service discovery to get implementations of what I need. I may eventually look into OSGi, but don't want to be bound to it, so I've abstracted away my service discovery mechanism so even it too is determined through DI.
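The extension mechanism from the first point above can be sketched roughly as follows. This is only an illustration of the idea, not the actual API: the `ScoreElement` and `Highlighting` names are invented here.

```java
import java.util.HashMap;
import java.util.Map;

// An optional capability that clients can probe for at runtime.
interface Highlighting {
    String color();
}

// The API class stays final, so clients can depend on it, yet new
// capabilities can be attached and queried by type.
final class ScoreElement {
    private final Map<Class<?>, Object> extensions = new HashMap<>();

    // Attach an extension instance under its interface type.
    public <T> void putExtension(Class<T> type, T extension) {
        extensions.put(type, extension);
    }

    // Query for an extension: the instance if present, otherwise null.
    public <T> T getExtension(Class<T> type) {
        return type.cast(extensions.get(type));
    }
}

class Demo {
    public static void main(String[] args) {
        ScoreElement element = new ScoreElement();
        element.putExtension(Highlighting.class, () -> "red");
        System.out.println(element.getExtension(Highlighting.class).color()); // red
        System.out.println(element.getExtension(Runnable.class));             // null
    }
}
```

Clients compiled against the final API keep working unchanged; newer capabilities arrive as extensions they can probe for, getting either an instance or null back.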
It has been a fun time designing this API, and I'm hoping that within two weeks I'll have it completely modular, and ready to go for Phon. A couple of weeks after that, it'll be up on my GitHub. I'm really hoping it's something people will want to use, and if they do, a framework they enjoy working with!

Revamping The Data Model

When coding the core data model for the score editor, I tried my best to keep any view-related information out of it, in an attempt to follow the Model-View-Controller (MVC) architectural pattern. Unfortunately, I've realized that this separation is actually making my life more difficult, and that in my own case it doesn't even make sense.

Why? Well, I'm not trying to create a core music library that can be reused by others (not anytime in the near future, anyways), but rather I'm attempting to create a professional and free piece of software so that people can create, edit, play back, and print musical scores and tablature. Intrinsic to that purpose are the display properties of the various elements in a score. In other words, view-related properties are part of the data model.

As another teaser, here's how far I've gotten in the rendering process:


There's a lot of work to be done, but I'm getting close to something presentable. The need to tackle the display properties of elements and somehow integrate them into the data model is my next step, and unfortunately that will take time. Right now I model only the very basic elements:
  • Score
  • Part
  • Staff
  • Bar
  • Column (a single note / chord)
  • Note
  • Instrument
along with a few other small things (e.g., note bends, rests, grace notes). What I need to do is include other display elements in the model so that I can more easily allow the user to modify anything and everything in the score. For example, I should model a beam so that the user can change its slope. I should model the note head so that the user can change its size, color, etc. I should model the stem, so that the user can change its length. I would provide an automated system that will do its best to get the score as close as possible to what the user would like, but a "one size fits all" solution doesn't exist because everyone has their own preferences.
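As a purely illustrative sketch of the idea above, folding display properties into the model might look something like this. None of these names come from the actual project; they just show the direction: each visual element gets its own modeled class with user-adjustable properties, with automated layout as the default.

```java
// Illustrative only: display elements as first-class model objects.
class NoteHead {
    double size = 1.0;          // user-adjustable scale factor
    int color = 0x000000;       // user-adjustable color (RGB)
}

class Stem {
    double length = 3.5;        // in staff spaces
    boolean autoLength = true;  // automated layout until the user overrides it
}

class Beam {
    double slope = 0.0;         // user-adjustable beam slope
}

class Note {
    final NoteHead head = new NoteHead();
    final Stem stem = new Stem();
}
```

The "one size fits all" problem then becomes a flag per property: the automated system fills in values while `autoLength` (and friends) are set, and a user edit simply switches a property over to manual control.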

Oh the joys of designing a serious piece of software!

Data + Rendering -- Design Decisions

Sorry for the lack of posts over the summer, but it was my time to just relax and do very little after a lot of hard work to finish my M.Sc. thesis. I've finally gotten a start on score rendering (see image below). I'm currently at an interesting point where I have to make a design decision. For a part in a score there can be multiple staves. For example, a piano score usually has two: one for treble clef and one for bass clef. For guitar tabs, there is often the tablature and then the score notation affiliated with this tablature.

My design decision is focused on the best way to structure my data hierarchy and rendering process to render this. In particular, I'm focused on tablature. The tricky thing here is the fact that the score and the tab are two different views of the same data. I have considered two possibilities:
  1. Consider score and tablature two different staves that reference the same set of data. This comes with a set of things to think about:
    • Pros
      • The rendering process can blindly render everything
    • Cons
      • Other parts of the program should know that these refer to the same set of data (e.g., when saving to file).
      • Right now my data hierarchy is represented using a parent/child relationship. Since these two staves point to the same set of bars, each bar would technically have two parents. I'd rather not change the way things are currently, so I would just have to make sure that in no situation it would be a problem getting the initial parent.
      • The user will most likely have to manually remove these staves. In other words, it might not be easy to implement a "Show Score/Show Tablature/Show Both" option. Maybe this isn't really much of a con?
      • Without any code that remembers the connection between the two staves, the user would be able to insert another staff in between them. Now if the user chooses to do this, it's his/her own choice so this may not be a bad thing, but it breaks up the connection between the two and the fact that they are connected (i.e., by the same data). Again, maybe this isn't really a con?
  2. Restrict this merely to the rendering process
    • Pros
      • Does not require any changes to the data hierarchy
    • Cons
      • Less flexible
      • Most likely will produce sloppier rendering code.
      • Code for user interaction would be uglier. For example, when the user clicks on the score rendering component, I need to figure out which staff is clicked. I would have to write code that checks the view type (score/tab/both) and understands that some staves would be rendered twice in the "both" viewing mode.
It's just one of those bigger decisions I have to make early. I'm pretty sure I'll go with the first one, but I'd like to hear thoughts and/or suggestions from other people (who I haven't confused yet).
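To make option 1 concrete, here is a tiny sketch of two staves sharing one set of bars. It's written in Java for illustration only (the editor itself is C++/Qt), and the `Staff`/`Bar` names are stand-ins, not the project's real hierarchy.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of option 1: standard notation and tablature as
// two Staff objects that reference the same list of bars.
class Bar {}

class Staff {
    enum Kind { STANDARD, TABLATURE }

    final Kind kind;
    final List<Bar> bars; // shared with the sibling staff, not copied

    Staff(Kind kind, List<Bar> bars) {
        this.kind = kind;
        this.bars = bars;
    }
}

class SharedDataDemo {
    public static void main(String[] args) {
        List<Bar> bars = new ArrayList<>();
        bars.add(new Bar());

        Staff score = new Staff(Staff.Kind.STANDARD, bars);
        Staff tab = new Staff(Staff.Kind.TABLATURE, bars);

        // Editing a bar through either staff affects both views.
        System.out.println(score.bars == tab.bars); // true
    }
}
```

The cons listed above show up immediately in this sketch: anything that walks the hierarchy (saving, parent lookup) has to know the two lists are one, since nothing in the structure itself records the connection.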

And for those who want to see my latest work, click here to get a feel for the current state of score rendering. It's just the basics for now (note heads/stems), so I have a lot of work to do still (e.g., beams, grace notes).

Back in Business

The score editor project has come off the back burner and is now up front. I still have work to do with regards to my thesis but I expect I'll have that done before mid-summer. Right now the focus is on the following:
  • Completing the port from Java. This task is mostly done, and we just need to implement
    • MIDI playback, and
    • loading/saving of project files.
  • Rendering of both scores and tablature.
  • Create a website (partially done).
  • Fix some bugs and glitchy behaviour.
So not much really. I'm really hoping to get the first public beta out before 2012, so here's hoping! For anyone interested, here are some of the technologies I'm currently using, all of which I enjoy:
  • Qt SDK
    • Qt Libraries
    • Qt Creator (highly recommended for C++ dev)
  • Boost C++ Libraries
    • Mostly for Boost Signals, which I prefer over the Qt signals/slots system due to it being far more flexible
  • Redmine
    • For internal project management (currently)
  • Django + virtualenv
    • For website dev (I edit everything with vim)
  • Inkscape
    • For vector graphics, which we use to produce the paths for various shapes (e.g., clefs and rests)
  • Git
    • For version control, which I highly recommend. Who knew branch-based development could be so easy? I also love being able to commit locally, and manually tweaking my commits.

Elided Labels in Qt

So for one of my projects I was dissatisfied with the fact that a QLabel whose horizontal size policy is QSizePolicy::Ignored will have its text clipped instead of having an ellipsis at the end (or somewhere in there). I whipped together a simple extension to QLabel that puts an ellipsis at the end based on the current size of the label. It's not complete in general (e.g., it doesn't really support multiple lines), but for me it gets the job done. Feel free to use this code for whatever purpose you please (i.e., it's in the public domain).

elidedlabel.hpp
elidedlabel.cpp

A New Look

Well, the score editor project my friend and I have been working on has taken a bit of a regression. In particular, we decided to switch to Qt and C++ because Swing just wasn't doing it for us. It just lacked in a native feel, particularly on the Mac.

So there's nothing much new here, but by using QGraphicsView we have been able to do some really neat stuff. In particular, printing was a breeze, as were exporting PDFs and supporting zoom. We also decided to display the score in pages instead of one long, unified page. I think it gives a more professional feel, and it also shows you exactly how the score will look when printed. Anyways, here's a screenshot showing zooming out, along with a new splash screen that yours truly put together. Not too shabby, but it still needs a bit more pizazz.



FIZZICKS!!!!

I haven't done a whole lot with my game engine stuff over the past week (been focusing on implementing GPU splatting for my research), but I decided to capture a video today. It's a little low quality, but it shows off the basics. For the most part, there's enough functionality in there to start working on a game, but I want to make the code simpler and easier to work with. Anyways, here's the video:



Currently I'm using Bullet for physics, CEGUI for the in-game GUI, DevIL for loading images, OpenGL for rendering, and a whole lot of Boost to make my life easier. The windowing (Cocoa, Carbon, X11 or Win32) is my own. Eventually I might do the same for the image loading and in-game GUI so that there are fewer dependencies, but for now I don't really care too much about that. I'm planning out a simple game to make eventually, so look forward to that in the future. Perhaps not the very near future, but quite possibly by the end of the year.

Modeling 101

Okay, this is far from me giving you a 101 class on modeling, because when it comes to drawing/modeling/things of that nature I suck pretty bad. Nevertheless, I amazed myself at how quickly I could whip up a "stick man" model with a basic skeleton using Blender.



He's in a sitting pose, waving I think. Yeah, I have skills *cough*. Anyways, other than a crash here and there, Blender is a pretty decent piece of software. Some of the keyboard shortcuts are non-intuitive but once you get them down you'll be unstoppable.

That screenshot also shows off my integration of CEGUI into my WIP. I struggled with two issues that I'll share. The first was that you need to have the OpenGL viewport set up correctly when initializing CEGUI. Unfortunately my initialization was occurring before I called glViewport, but that was easily resolved. The second was that having any VBOs/VAOs bound messes up CEGUI (for now). Be sure to release any bound buffers/arrays before rendering things.

CEGUI is a pretty hefty library, but it looks to be incredibly robust and is still actively developed. I loved how I could manually inject input events into the CEGUI system, which meant easy integration with my own windowing system. I noticed that some other systems used existing input libraries or hooked into something like GLUT or SDL. I also love the ability to describe things in XML. I think CEGUI and I will get along quite fine until I decide to write my own UI system.

There was a bit of messing around getting things to link properly, but I managed to get everything working in the end. I'll finish off by pointing out qmake. I use it for my Makefile generation and I absolutely love it, mainly because it's incredibly simple. CMake is another option if you happen to be looking for one.

We got monkeys!

Yep, we do have monkeys. Blender monkeys, to be exact. I whipped up a simple loader for Wavefront OBJ models. It only loads the basic geometry for now, so I have to work on the material stuff. One problem is my lack of a shader that does more advanced illumination, so that's something to work on too. The two screenshots I've posted only do per-pixel lighting with Lambertian reflectance. I also want to make my OBJ loader avoid duplicating vertices that share the same position + normal + texture coordinates.



I have also gotten things working with Cocoa. I do the event loop myself to make things easier, but it's all good. Full screen mode works in Cocoa too, but not in X11/Win32 because I haven't had the chance to do so yet. And with Cocoa I'm briefly gonna bring up the PIMPL pattern and how I made use of it.

The PIMPL (pointer to implementation) pattern is a way of hiding a class's implementation from every translation unit other than the one that implements it. This is possible because a class member may be a pointer to a type that has only been forward declared.

So why did I need this? Well I needed to store Objective-C pointers (i.e., NSWindow *) in my Window class. The problem here is that everything works wonderfully in Objective-C files, but everyone else chokes when including Window.h because they get confused by the Objective-C code in <Cocoa/Cocoa.h>. I couldn't really forward declare these Objective-C classes either, because again I have the same problem: Objective-C in a C++ file just doesn't work (as far as I know?). Enter the PIMPL. I forward declare a struct that will hold these Objective-C pointers and define that struct in my Objective-C implementation of the Window class. Problem solved, whahay!

Game Design

So, one thing that I've been working on over the past week is reviving my "game engine" that I started working on a couple of years ago. It's not far, but I'm currently happy with the way things are going:
Game Engine Screenshot

Pretty simple, I know. Shows off some basic texturing and per-pixel lighting, but that's about all I got for now. Currently this is all done in OpenGL, but I've abstracted many of the concepts in such a way so that I could easily swap in a DirectX renderer. This is done at compile-time though, so that I avoid dealing with dynamic libraries.

It's pretty easy to get something up and running too. I just extend an application class, which takes care of some nitty gritty details for me (e.g., the game loop). With the help of some compile-time polymorphism, I just override three methods: initialize, update, and render. Also, input handling (keyboard + mouse) is currently implemented using mappings to boost::function objects. These mappings are registered at runtime, generally in the initialize method one specifies.

There's plenty of work to be done, but I think it's a good start. Some things I need to work on:
  • Resource Management. Currently all resource management is done through boost::shared_ptr. In other words, there isn't really any "management" happening. I need some form of manager that can load various assets, caching things as necessary to help with efficiency.
  • Higher-level Primitives. Currently I send triangle primitives to the renderer, which is a little too low-level for my liking. I'd like to pick a set of base primitives that are at a higher level (e.g., triangular mesh, model, terrain).
  • Deferred Rendering. I've always wanted to take a stab at implementing deferred rendering. I'd just love to play around with shaders in general and getting some fancy stuff on the go.

Of course, there's a lot more than that -- Terrain LoD, Character Animations, GUI Rendering, Game Logic, Physics, etc -- but it's a start!

Components, systems, subsystems, entities...*collapses*

So I've recently been reading various articles and forum topics on component-based design, or Entity Systems (ES). This concept was probably first used in games when the first Dungeon Siege came out. Apparently its closest neighbour is Aspect-Oriented Programming, which I know zero about. At first it seems like a complicated idea, but after looking through a bunch of info on the net, I think I've got the basic idea. Here's my attempt at giving an eagle-eye view of what it is, or rather, what I think it is in the context of a game:

You have three major concepts: entities, components, and systems.

Entity

An entity is a collection of components. These components describe how an entity functions. In terms of a game, an entity is a single instance of an object that exists in your game. So if you have two tanks of the same "type" in your game, you have two entities. Entities do not store data or contain logic. They are simply a means of identifying something.

Components

Components are the basic building blocks for entities. They store information and contain the functionality specific to their existence. For example, you might have a Renderable component in your game which contains all of the logic necessary for the rendering system to display an entity on the screen.

System

Systems have their own logic specific to the components they correspond to. It's technically possible to implement all of the logic in the system, with the component simply being a data container, but I don't think I would like such a design. Often there is a [nearly] one-to-one correspondence between systems and components. You may have a RenderSystem that stores all Renderable components and displays them when necessary in your game loop. You could also have a PhysicsSystem, an AnimationSystem, and so on.


That's my stab at defining those three concepts. The main task in designing your system/game/whatever is to follow the "separation of concerns" idea, each "concern" often mapping to a component/system duo.
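The three concepts above can be sketched in a few lines of Java. The names (`Renderable`, `RenderSystem`) are invented for illustration: an entity is just a bag of components, and a system operates on whichever entities carry its component type.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Component: a building block holding data (and possibly logic).
interface Component {}

class Renderable implements Component {
    final String mesh;
    Renderable(String mesh) { this.mesh = mesh; }
}

// Entity: no data or logic of its own, just a collection of components.
class Entity {
    private final Map<Class<?>, Component> components = new HashMap<>();

    void attach(Component c) { components.put(c.getClass(), c); }
    boolean has(Class<? extends Component> type) { return components.containsKey(type); }
    <T extends Component> T get(Class<T> type) { return type.cast(components.get(type)); }
}

// System: logic for one component type, run over entities each frame.
class RenderSystem {
    void update(List<Entity> entities) {
        for (Entity e : entities)
            if (e.has(Renderable.class))
                System.out.println("drawing " + e.get(Renderable.class).mesh);
    }
}
```

Attaching or detaching a component at runtime is all it takes to change an entity's behaviour; no class hierarchy is involved.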

So what do you get out of this? Well one thing, it's great for a large development team because, for example, the programmers can offer the game designers a set of components that they can freely use to construct a multitude of objects. These components can be added to an object at runtime to give a dynamic design.

For example, let's say you have a Targetable component which allows the player to select an entity as a target for his/her attacks. Well, let's say we have an NPC that is neutral to the player, but the player does something to provoke the NPC into a battle situation. All we have to do now is attach the Targetable component to this NPC and we're done. The logic surrounding what the player can and cannot attack is simply encapsulated in that component. In the classical OOP approach we would have to define an ITargetable interface that says whether or not the object described by the class is targetable. With ES the existence of a Targetable component on an entity implicitly tells us that the entity can be targeted. With the OOP style, we have to store a variable to say if that object is targetable or not at a specified time.

What one finds in this design is that the deep hierarchy often found in an OOP design is now almost completely flat. Object composition/aggregation is now one's guiding principle. Is this a good thing or a bad thing? One difficulty that often arises is inter-component communication, generally within entities. One could have these components store pointers/references to each other, register callbacks with each other, or use some messaging system to communicate. An example of such communication that seems to often arise is with a physics component. When updated, often animation/spatial components need to be notified of the result.

Well, I'm really not sure at the moment whether this design is good or bad, but I think I'm going to give it a shot and document what I find. Things are slow-going with my game [engine] project because it's something I just pick at from time to time. I have no intention of getting a product out the door right away, so I use it to play with new ideas.

The reason I posted this entry was to get other people's thoughts on this design style, along with some concrete pros/cons from their own experience. I'd love to hear about anything surrounding this topic, including other design patterns and styles that you've found useful in your own projects.

Pointgrey Cameras

For anyone who doesn't know my research, I'm looking into stereo vision algorithms in an underwater camera array. What I'm hoping to do is a rough scene reconstruction that has improved results over blindly using an existing stereo algorithm.

Now on to what this post is about. I found a little subtle and, as far as I can tell, undocumented feature of the Pointgrey cameras. If you use the FlyCapture API then you would normally call flycaptureStart and flycaptureStop to start/stop grabbing images, respectively. For our purposes, we only need to grab single shots from the camera array, not streamed video. On top of that, we want the shots to be (for the most part) synchronized across the whole array.

Here's the twist: the start/stop calls aren't really what starts the "image grabbing" process; they simply power the camera up/down (via the CAMERA_POWER register). The start/stop calls appear to only lock down/release a DMA context for the purposes of streaming. That means, if you have a firewire card with the TI chipset, you can only start 4 cameras simultaneously.

So how do we grab a synchronized set of images using this knowledge? Well this is only applicable to the Grasshopper line of cameras since they have an on-board framebuffer that you can control. Here's what we do:
  1. Place the cameras in "image hold" mode by setting bit 6 to 1 in the IMAGE_RETRANSMIT (0x12F8) register. This will store images on the camera.
  2. Power up the cameras by setting bit 31 to 1 using the CAMERA_POWER register.
  3. Simply wait around long enough for the frame buffers to fill up
  4. Put each camera in trigger mode by setting bit 6 to 1 in the TRIGGER_MODE (0x830) register. What this does is prevent any more images from getting stored in the frame buffer.
  5. For each camera
    1. Start the camera so you can transfer data over to your system
    2. Set bits 24-31 to 0x01 in the IMAGE_RETRANSMIT register to transfer an image.
    3. Stop the camera

This works great for using the cameras in a non-streaming context where you only have a single firewire bus/system to work with. If you want the images to be synchronized, be sure to set them all to the same FPS, and enable image timestamping (FRAME_INFO register, 0x12E8). Now all you do is find the set of images across all cameras which are closest together.

One other subtle thing I found is that if you start the camera with the same dimensional parameters, but with a different bandwidth parameter, the on-board frame buffer will be flushed. Anyways, that's it for this post. I thought it would be nice to post this information just in case someone else out there has had, or will have, a similar issue. Cheers!

Java Annotations: A [Somewhat] Brief Introduction

So in my last post where I described a messaging system we implemented, I also mentioned our use of annotations. I thought it would be appropriate to write a follow-up post with a brief introduction to them, so here it is. I'm going to talk about annotations in the Java sense, but a lot of this propagates to other [reflective] languages too.

Annotations are often defined as "notes of explanation or comments added to text". In Java, one can regard an annotation as metadata attached to code, a piece of code describing code. Well, the first question that one might ask is "Why do I need this when I can just use comments?" and indeed, this question makes for an excellent starting point when writing an introduction. Well, probably the main reason to have annotations over comments is that annotations are part of the language, with a specific syntax, which in turn allows parsers to easily understand them. Comments, on the other hand, could be in any form and would be a huge mess to understand by a compiler, unless a certain standard was specified.

"So what good are they for?" you might ask next. Well, they can let you do some pretty neat stuff. First I should describe the three flavors of annotations in Java:

source code

Present at the source code level only; the compiler discards them during compilation. These are useful for giving hints to compilers and programs about the nature of the code itself. For example, you may have seen the @Override or @Deprecated annotations in Eclipse or in other code. The former specifies that an error should be produced if the method doesn't override one in a superclass, and the latter marks a class, method, etc. as deprecated.
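Both of those annotations in one toy class:

```java
class Base {
    void update() {}
}

class Derived extends Base {
    @Override        // compile error if Base had no update() to override
    void update() {}

    @Deprecated      // callers of this method get a compiler warning
    void legacyUpdate() { update(); }
}
```

Neither annotation changes what the program does; they only let the compiler check intent (@Override) or warn callers (@Deprecated).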

class

This is the default flavor. Compilers will produce bytecode containing these annotations, but you generally won't be able to access them at runtime. Useful if you are doing bytecode analysis of code.

runtime

To me, possibly the most useful flavor of annotations for developers. These annotations can be requested at runtime, which allows you to do neat tricks; this is possible thanks to the reflective nature of Java. You can access them through the Class.getAnnotations() method.
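For example, a runtime-retained annotation can be read back reflectively (the `Tagged` and `Thing` names here are made up):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A runtime-retained annotation and a class carrying it.
@Retention(RetentionPolicy.RUNTIME)
@interface Tagged {
    String value();
}

@Tagged("example")
class Thing {}

class AnnotationDemo {
    public static void main(String[] args) {
        // Read the annotation back through reflection at runtime.
        Tagged tag = Thing.class.getAnnotation(Tagged.class);
        System.out.println(tag.value()); // prints "example"
    }
}
```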


So how do we create an annotation? Well, it's pretty straightforward. Note that I'll be using the messaging system from my previous post as an example. Here's what the Message annotation looks like:

@Retention(value = RetentionPolicy.RUNTIME)
@Target(value = {ElementType.FIELD})
public @interface Message {
    Class<?>[] signature() default {};
}

So the first line says we want to be able to retrieve this annotation at runtime. Note the use of the @ symbol here. This is the notation used for annotations. The second line says what type of things this annotation can be applied to. In the above example, it can only be applied to a field in a class, but not methods, or classes themselves. Note the @ symbol before the interface keyword in the third line. This is how we define an annotation. Finally, the fourth line specifies the one and only property in our annotation, and that's an array of classes that specify the signature of the message (i.e., the types for the data that will accompany a message). We specify the default signature to be an empty array. What's interesting in this example is that we used annotations to describe an annotation itself (@Retention and @Target describe @Message).

For our messaging system, the field itself is a static member that is a String, and that string defines the name of the message. For example,

@Message(signature = {String.class})
public final static String MYMESSAGE = "myMessageName";

describes a message with the name "myMessageName" which sends a String argument to all receiving functions. If we wanted to, we could have defined a second property in the annotation for the message name. In our message delivery class, we can then loop through all the fields in a class to register messages like this:

public void registerSender(Class<? extends MessageSender> sender) {
    MessageData msgData = getData(sender);
    for (Field field : sender.getDeclaredFields()) {
        if (field.isAnnotationPresent(Message.class)) {
            if ((field.getModifiers() & Modifier.STATIC) == 0)
                continue;

            Message msg = field.getAnnotation(Message.class);
            msgData.addMessage(field.get(null).toString(), msg.signature());
        }
    }
}

Note that, for simplicity, I excluded the try/catch blocks and log messages in the above. A fairly straightforward piece of code: for each field in the class, if the field is static and has the Message annotation, we add the message to the set of messages we understand. This is far more convenient than having to register each individual message. For message receivers we have a ReceiverMethod annotation that I won't explain, but it looks something like this:

@Retention(RetentionPolicy.RUNTIME)
@Target(value = { ElementType.METHOD })
public @interface ReceiverMethod {
    // Special message name which allows catching all messages from a sender
    public static final String CATCHALL = "<<all>>";

    // Properties
    public Class<?> senderClass();
    public String message();
}

We can then do something similar to the registerSender method above to register our receiver. So that's my quick introduction to annotations. Maybe you can find other interesting ways to make use of these little critters in your own programs.

Listening In

So it came to our attention recently that our application was making abundant use of the Observer/Listener pattern. For those not familiar with this pattern, you'd use this guy when you want the outside world to know about state changes in an object. This pattern is used often when developing with various architectural patterns, such as Model-View-Controller (MVC). Other examples, in Java, include many of the components in Swing, java.util.Observable, and java.beans.PropertyChangeListener.

I personally had issues with our extensive use of the listener pattern. Every time we added a new data class, we'd have to rewrite the code for storing and notifying the listeners. Note that some of this could have been factored out into its own class. Nevertheless, whenever we changed a listener interface, changes had to be made for everyone using that interface. Since we use Eclipse, this isn't too big of an issue, but it still bugged me a little. The final issue was that sometimes things needed to listen to events on EVERY instance of a data class, not just a specific instance. To make our lives a little easier, this required us to create a new type of listener that would be statically available (read: like a singleton).
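For concreteness, here is the kind of wiring every observable data class ends up repeating, shown with the standard java.beans helpers (the `Score` class is just an illustration):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Classic Observer/Listener boilerplate: storage, registration, and
// notification code that each new data class has to carry around.
class Score {
    private final PropertyChangeSupport listeners = new PropertyChangeSupport(this);
    private String title = "";

    public void addPropertyChangeListener(PropertyChangeListener listener) {
        listeners.addPropertyChangeListener(listener);
    }

    public void setTitle(String title) {
        String old = this.title;
        this.title = title;
        // Notify every registered listener of the state change.
        listeners.firePropertyChange("title", old, title);
    }
}
```

Listening to every instance of Score, rather than to one object you happen to hold a reference to, is exactly where this pattern starts to strain.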

My alternative was to create a messaging system. Before I started on it, I thought about the design so I could generalize it into something others might find useful too. A couple of the main design decisions that came up were:
  • There should be a concept of a message sender, receiver, and a delivery system to coordinate message sending and receiving.
  • There should be registration facilities to allow the system to become as type-safe and interface-like as possible.
  • Java annotations (retention set to RUNTIME) will be used to define messages and receiver methods.
  • When registering a specific receiver instance to receive messages, weak references should be used so that (a) the outside world doesn't have to concern itself with unregistering the instance, and (b) given (a), the garbage collector can destroy that instance when necessary.

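The weak-reference point can be sketched roughly like this (ReceiverRegistry and its method names are my own illustrative choices, not the system's real API):

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: the delivery system holds receivers weakly, so registering an
// instance never stops the garbage collector from reclaiming it.
class ReceiverRegistry {
    private final List<WeakReference<Object>> receivers = new ArrayList<>();

    void register(Object receiver) {
        receivers.add(new WeakReference<>(receiver));
    }

    // Collect the still-live receivers, pruning entries the GC has cleared.
    List<Object> liveReceivers() {
        List<Object> live = new ArrayList<>();
        for (Iterator<WeakReference<Object>> it = receivers.iterator(); it.hasNext(); ) {
            Object r = it.next().get();
            if (r == null) {
                it.remove(); // receiver was garbage collected; drop the stale entry
            } else {
                live.add(r);
            }
        }
        return live;
    }
}
```

Callers never unregister anything; dead entries just fall out of the list on the next delivery pass.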
I wrote up a test program that uses this message system, and I personally find it a reasonably elegant system, for one that uses reflection. So what do we get out of this? I'll start with the cons (that I can think of), followed by (what I consider to be) the pros:
  • Cons
    • We lose a lot of compile-time error checking
    • We introduce some overhead, mainly due to using the reflection API
  • Pros
    • Adding or removing messages (generally) will require less work elsewhere in the code
    • Receivers only need to implement the messages they want to receive
    • Receivers are not required to name their methods as per an interface
    • Receivers can define what I call "catchall" methods: methods that accept all messages from a specified sender (this could also be done with the observer/listener pattern, but I believe it would be a little less elegant)
    • Receivers define an accept method which allows them to dynamically control which instances they receive messages from

Currently I'm holding on to this until I feel it satisfies the needs of our score editing project completely, but after that I think I'll release it to the public so that others might get some use out of it.

Beginnings

So I'm going to start my blogging off with an introduction to my project: a free, cross-platform musical score editor (which currently has no name). Our team consists of just myself and a classmate from my undergrad. We previously worked together on a team project during a required course, so we were already familiar with team development.

For me, projects come in two flavors:
  1. those I do simply for my own personal enjoyment, such as a game; and
  2. those that fill a need for me, such as small scripts to get repetitive tasks done.
The musical score editor falls into category 2, but it is definitely an enjoyable project too. I found myself unhappy with the existing free software for editing scores. Since I focus on guitar, I was looking for something with a simple interface where I could whip up a guitar tab and be on my way. Probably the best I could find was TuxGuitar, but it was far from a pleasurable experience. This established a need for me, one whose solution we will eventually share with others. So with a project idea in hand, the next thing was to lay out some basic requirements:
  • Cross-platform. I am an OS X user, and my friend is a Linux user.
  • An interface that is both simple for the first-time user, but powerful for the more advanced users.
  • Quick keyboard access to the most common commands to greatly improve throughput.
  • Fully-featured. We want users to be able to do just about anything and everything they'd want to do with their musical scores. Clearly this will take time, but it is our goal.
With these requirements in mind, we decided that Java would make our lives far simpler. We chose Swing over SWT for our GUI library, since we both know and enjoy Swing. Our goal is to eventually bring this project to a level comparable to that of commercial software. It's a big goal, but we're extremely motivated and really enjoy this project. Anyways, some things I plan to blog about in the near future:
  • Java: not always that cross-platform. Various topics on producing code and user-interfaces that feel more native.
  • Working with JNI.
  • Developing a flexible and easy-to-use plugin system.
  • Object-based rendering systems: the pros and the cons.
  • Other cool stuff!
We already have a highly functional and [mostly] stable version of our score editor internally, but we want our first public release to really be something amazing. We have many incredibly powerful features planned, some of which we have never seen before in the area of score editing. Hence, if you're reading this entry you should stay tuned for some good stuff! I'd post a teaser screenshot, but everyone likes a bit of suspense :)

What's up?

I've been quite inactive as of late. One reason being that I was truly inactive, the other being that my blog was locked temporarily because it was tagged as a "spam" blog!

Nevertheless, I'm back in action and I hope to start posting some things on one of my new interests: ray tracing. I have a ray tracer up and running and things are going pretty well so far. Current features include:
  • Standard reflection + refraction (no special models yet, such as the Fresnel equations)
  • Several geometries: triangles, planes, spheres, boxes, quads, and cylinders
  • Anti-aliasing
  • Lighting: area, point, directional
  • Diffuse reflections
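As a small taste of the math behind the first feature: mirror reflection of an incident direction i about a unit surface normal n is r = i - 2(n·i)n. A quick sketch in Java (my own helper class for illustration, not the tracer's actual code):

```java
// Minimal 3-vector with just enough operations for reflection.
class Vec3 {
    final double x, y, z;

    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
    Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }

    // Mirror reflection of incident direction i about unit normal n:
    //   r = i - 2 (n . i) n
    static Vec3 reflect(Vec3 i, Vec3 n) {
        return i.sub(n.scale(2.0 * n.dot(i)));
    }
}
```

For example, a ray heading down-and-right hitting a floor with normal (0, 1, 0) bounces up-and-right, as you'd expect from a mirror.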
And some plans for the near future:
  • Support for loading several common model files
  • A space partitioning scheme to reduce rendering time
  • Transformed geometry (i.e., geometry + transformation matrix)
  • Support for instancing (multiple instances of the same geometry)
  • Python bindings to control/define scenes and the ray tracer
  • Photon mapping
  • Radiosity rendering
But some of those will take time. The list is sorted from highest priority at the top to lowest priority at the bottom. That's about it. And just to whet your appetite, I will leave you with my most recent rendering:

Problem Solving & TopCoder

There's no arguing that problem solving is a vital part of our intellectual functions and something that helps keep the mind sharp. Even something as simple as a logic puzzle, such as Sudoku, can help our mental well-being.

One of my favorite activities for keeping my problem solving sharp is participating in competitions at TopCoder, a company dedicated to providing online competitions in algorithms, component design, and so on. I personally participate in the algorithm competitions. I highly recommend that everyone who can program in Java/C/C++/C# participate from time to time, even if you feel like you have no chance. If anything, it will keep your mind and your programming skills sharp. The scoring system is based on time, so you learn to gear your mind towards immediately categorizing a problem and then developing a solution.

Currently, the TopCoder Collegiate Challenge is underway, and I have surpassed my expectations this year. Last year I only made it past the qualification rounds, but this year I have matched that and made it past two more online rounds. If I can make it past two more online rounds, I'll have the opportunity to go to Orlando and compete against 47 of the world's finest. Realistically speaking (not pessimistically), I don't expect to make it that far. Nevertheless, I will definitely try my best to make it even further.

Don't think it's just your physical self that needs to be kept "in shape"!

End of July Update

Yet again, not much to update on. We're just beating away at the newly merged trunk and trying to fix up some issues before the 0.5 release. After that it'll just be more issue fixing. Since I've been taking a course on ordinary differential equations this summer, I may consider working on the facilities in SymPy that solve ODEs. Only one more month is left in the summer, but a lot has gotten done, and I'm looking forward to seeing what state SymPy ends up in.

I'm also looking forward to school starting again. Ignoring work, this summer has been far from what I hoped it would be, but I'm probably partly to blame for that one. As for school, I'll be taking these courses:
  • Integration and Metric Spaces
  • Abstract Algebra
  • Combinatorial Analysis
  • Computer Graphics
  • Computational Complexity
I'm not looking forward to the first one, but the rest of them should be alright. Although it might be a bit of work, I expect Computer Graphics to be somewhat enjoyable, or as enjoyable as a school-related item can be! This year will be my last year of regular courses. I'm hoping to have another research award next summer, and the fall following it will be my honors thesis, for which I have no idea what to do at the moment. So at the end of '08 I'll officially have my undergraduate degree from MUN: a joint major in Computer Science and Pure Mathematics.

GSoC Update

It's been quite some time since I've had an update, so I felt it was about time to write a little something. There's been a lot of "off to the side" work. Preparations are currently under way to merge the research branch of SymPy with the actual trunk. This is a fair amount of work, so we're trying to coordinate everything to run smoothly. I have rewritten the unit tests to be supported by py.test, but there is also the moving/updating of tests from the trunk and ensuring modules still work along with their tests. This whole merge will be my major task for the next little while.

I will be heading to WADS 2007, the Workshop on Algorithms and Data Structures, in mid August. It's a conference that [sort of] deals with my current research area, so I was advised to go. I'm going not only for that reason, but also because I'd like to see whether the academic lifestyle is the one for me. The conference looks to have some interesting papers, and I bet there will be some interesting people to meet.