Tuesday 3 December 2013

Oculus Rift: First Impression

First Impression


I was very lucky today to experiment with an Oculus Rift Development Kit. By now I am probably several months late for an elaborate packaging-installation-performance review; you already know most of this from many other posts in the Oculus community (e.g. r/oculus). However, the experience of playing with a Rift as a user was unprecedented, and I believe some thoughts are worth sharing.

Firstly, the Rift came in a beautiful case that fully protects the device, and everything was included: UK and US plugs, an HDMI cable, a mini-USB cable and a USB-to-DVI adapter. The professionalism the Oculus team shows with this "product" presentation of the SDK is amazing.

Unfortunately I wasn't able to try it on my Ubuntu laptop, as a DisplayPort-to-HDMI converter would have been necessary, so our first exploration was done on a Sony Vaio equipped with a mainstream NVidia GeForce 330 that did run the demos, though with the FPS (in the Rift) varying from 30 to 40. The installation was seamless: no quirks, no missing DLLs (except when I first built a demo that needed some DirectX components, which come with the redistributable), no surprises. We used the second of the three pairs of lenses (A, B and C) that came with the Rift; the user is free to pick whichever pair achieves the best focus, which accommodates nearsighted people. The 1280x800, 32-bit colour LCD head-mounted display (effectively two 640x800 displays; an HD prototype has already been presented) was ready to have its gyroscope sensor calibrated.

And then the fun started with the free demos from share.oculusvr.com and www.riftenabled.com. Many hours of "playing" and experimenting with the setup followed. Among the demos, the few that I highly recommend (as a must) are vr.training (inspired by the Metal Gear Solid training levels ~nostalgia alert~ and highly recommended for low-end graphics systems), VR Cinema (which can load an avi, wmv or mkv and play it as if you were watching it in a theater), Titans of Space (educational), Blocked In (just a presentation of a single room for an adventure game), Dreadhalls (an absolutely atmospheric horror game), Fallen Angel's Lair (demonstrating UDK, the Unreal Engine), the Oculus Tuscany Demo (the 101 demo from Oculus), Ocean Rift (watch out for the shark) and the coasters from archivision. Next in our evaluation comes the heavy artillery of gaming (e.g. Half-Life 2, by opting into the beta from its Steam menu and providing the "-vr" command line argument).

The simulator sickness


Before trying the Rift, I was sceptical (as is everyone) about the simulator sickness that users experience, which results from slight disorientation as a game progresses. The user builds up discomfort from in-game locomotion, rapid rotations, fast changes in elevation etc., all forms of acceleration that the brain perceives but the body doesn't actually feel. Some of this can be improved dramatically at the technology level (latency, tracking precision), while other aspects must be taken into consideration at the game design / HCI level. The developer.oculusvr.com site has a very informative wiki page with guidelines that limit the sickness effect (e.g., placement of the camera, displaying text, speed of elements, flashing, providing static references like a cockpit, etc.).

Screen-door effect

 

The most annoying thing in the whole evaluation is that, because the low-resolution displays sit extremely close to your eyes, you can clearly see the black lines between the pixels. From what I read in the community press, this effect can be eliminated with both higher resolution and a higher pixel fill rate (to my understanding).

Photos: the packaging; the lenses; the USB box (transfers sensor data to the PC); a rear view; a side view (controlling the distance of the headset from the eyes).

I would really recommend the "Step into the Game with Oculus Rift and Unity 4.2" video from Unite 2013, in which Peter Giokaris explains elements of the device and gives a primer on Unity 3D development for the Rift.

Conclusion 

(as an end user; not yet API-wise)


Oculus Rift is definitely THE future of Virtual Reality (as a computer science field in general). Beyond home entertainment, it can have many applications: boosting productivity and efficiency, enhancing (augmenting) reality, helping people with vision problems (as in a recent project about diplopia) and many more. The final product is expected to benefit from the growth of the smartphone market and its demand for better, more vivid, high-refresh-rate displays at reduced prices.


Tuesday 19 November 2013

Qt, MPRIS2 and Clementine


I have been using Clementine on a daily basis for some months now and I finally feel that I have a solid music player for my Ubuntu box. Clementine is a Qt-based media player inspired by Amarok 1.4, and the maintainers have done a great job supporting and extending the project while building a solid, dedicated community around it (follow on Facebook, on Twitter, on Last.fm).

Lately I noticed a little quirk (where a quirk is a low-importance abnormality for most people, but not for my state of mind :P) in Clementine's interaction with the Sound Menu widget in Ubuntu's taskbar (Issue 3962): the playlists open in Clementine weren't reflected in the widget at the same time. This initially led to Arnaud fixing a crash triggered by adding a new playlist, re-opening Clementine, removing the playlist from Clementine and then selecting it from Ubuntu's sound menu. But my initial obsession still hadn't been satisfied, so I kept looking for a way to trigger the rebinding of the Sound Menu to the playlist collection upon each playlist change. I learned several things over the last weekend that I'd like to share, but I was also reminded that a lot of user-level IPC is text-based, and progress is a matter of text-based specifications that won't fully express the semantics of the underlying processes.

 

Playing with D-Bus


The Sound Menu communicates with Clementine via the MPRIS2 D-Bus interface (MPRIS2 is the latest specification for communicating with media players). I knew nothing about the message bus of Ubuntu's graphical system, so I experimented a little. However, I could not find a way to refresh the Sound Menu. How was it populated at initialization, and why couldn't we find a way to re-populate it on demand? The specification isn't clear about this, even if you study it in detail. A bug reported back in 2011, however, shed light on our case:
Sound Menu should re-read playlists from MPRIS apps when PropertiesChanged is posted
the description of which was:
At the moment, the sound menu only ever calls GetPlaylists for an MPRIS app once, immediately after it appears on the bus. If an app's playlists change, there's no way for the app to notify the sound menu that the playlists should be re-read.
As you can see from the fix, from then on you could get an update by posting an arbitrary PropertiesChanged signal to the Sound Menu, triggering it to call GetPlaylists again. Below you can see the output of dbus-monitor, which captured the traffic of a GetPlaylists method invocation (from the sound panel to Clementine).
method call sender=:1.45 -> dest=org.mpris.MediaPlayer2.clementine serial=425 path=/org/mpris/MediaPlayer2; interface=org.mpris.MediaPlayer2.Playlists; member=GetPlaylists
   uint32 0
   uint32 100
   string "Alphabetical"
   boolean false
method return sender=:1.121 -> dest=:1.45 reply_serial=425
   array [
      struct {
         object path "/org/mpris/MediaPlayer2/Playlists/29"
         string "Playlist 29"
         string ""
      }
      struct {
         object path "/org/mpris/MediaPlayer2/Playlists/30"
         string "Playlist 30"
         string ""
      }
   ]
In the example above, the sender that invoked the GetPlaylists command is the com.canonical.indicator.sound service. The call carried four arguments, one for each parameter of GetPlaylists (Index, MaxCount, Order and ReverseOrder), and the response was an array containing two playlists. Unfortunately, yet another quirk came to light, which I believe is related to the "sound menu caches playlists which causes issues" bug that was fixed (or reported fixed) recently. Below are the tools that guided me through the process of experimenting with D-Bus.


Several interesting utilities exist, like qdbus (for Qt-based applications), mdbus2 (for general introspection) and dbus-send for complete control over sending commands on D-Bus. There are also visual tools (like Qt's D-Bus Viewer, qdbusviewer) that will definitely assist you.
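Since Clementine is itself a Qt application, the whole trick can also be expressed with Qt's own D-Bus API. Below is a minimal sketch of mine (not Clementine's actual code) for posting the PropertiesChanged signal on the MPRIS2 Playlists interface; PlaylistCount is a real property of that interface, but the function name and the choice of property are illustrative.

#include <QDBusConnection>
#include <QDBusMessage>
#include <QStringList>
#include <QVariantMap>

// Illustrative helper, not Clementine's actual code: emit
// org.freedesktop.DBus.Properties.PropertiesChanged on the MPRIS2 object
// path, nudging the Sound Menu into calling GetPlaylists again.
void notifyPlaylistsChanged(uint playlistCount) {
    QDBusMessage signal = QDBusMessage::createSignal(
        "/org/mpris/MediaPlayer2",            // object path fixed by MPRIS2
        "org.freedesktop.DBus.Properties",    // standard properties interface
        "PropertiesChanged");
    QVariantMap changed;
    changed["PlaylistCount"] = playlistCount; // changed properties (a{sv})
    signal << "org.mpris.MediaPlayer2.Playlists" // interface whose property changed
           << changed
           << QStringList();                  // invalidated properties (as)
    QDBusConnection::sessionBus().send(signal);
}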

Qt

 

I didn't have any previous experience with Qt, but I had plenty with Windows Forms, ASP.NET and Silverlight, enough for intuition about the general abstraction, plus some MFC C++ experience that made it easy to catch up with Qt's lower-level nature.

The thing I found interesting about Qt at first glance was the mechanism it employs for signalling between objects. Qt uses a loosely coupled design that separates the concerns of binding signals to slots, diverging from the usual callback-based design. In essence, signals are emitted when something happens and slots are the potential handlers. The gluing code (which signal is going to be handled, or delivered, by which slot) is realized by a connect function. The documentation mentions:
You can connect as many signals as you want to a single slot, and a signal can be connected to as many slots as you need. It is even possible to connect a signal directly to another signal. 
What is of great interest (for the C++ realm) is that the connect functions are promoted with a programming model that is considered type-safe (or at least safer than a callback-based design).

Signals and slots can be wired as below (here connect is called inside a QObject subclass, so the receiving slot belongs to this):
connect(playlist_manager,
        SIGNAL(SelectionChanged(QItemSelection&)),
        SLOT(SelectionChanged(QItemSelection&)));
Note that SIGNAL and SLOT are macros that back the check of whether types match (and certain rules, e.g. about the number of parameters between signals and slots). How are these macros defined?
#define Q_SLOTS 
#define Q_SIGNALS protected 
#define SLOT(a) "1"#a 
#define SIGNAL(a) "2"#a
From the definitions above you can see that something different happens from what you may have initially thought: the macros merely prefix their argument with a code and turn it into a string, and the real work, including the signature checks, happens at run time against tables generated by Qt's Meta-Object Compiler (moc). Why does Qt use moc for signals and slots? This article provides a reasonable argument: this way the syntax stays easy to read, the generated code is compiled by a standard C++ compiler, and the performance cost is not that big a compromise compared to a template metaprogramming solution. You can find more info in the Using the Meta-Object Compiler (moc) article.
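To tie the pieces together, here is essentially the classic minimal Counter example from Qt's own documentation, lightly adapted as a sketch (when the class lives in a .cpp file, moc still has to process it, e.g. via an #include "main.moc" line at the end with qmake or automoc):

#include <QObject>
#include <QDebug>

// One signal, one slot; moc generates the metadata that makes the
// run-time lookup of "2valueChanged(int)" / "1setValue(int)" work.
class Counter : public QObject {
    Q_OBJECT
public:
    Counter() : m_value(0) {}
    int value() const { return m_value; }
public slots:
    void setValue(int v) {
        if (v != m_value) {
            m_value = v;
            emit valueChanged(v);   // notifies every connected slot
        }
    }
signals:
    void valueChanged(int newValue);
private:
    int m_value;
};

int main() {
    Counter a, b;
    QObject::connect(&a, SIGNAL(valueChanged(int)),
                     &b, SLOT(setValue(int)));
    a.setValue(12);         // a emits valueChanged(12); b's setValue runs
    qDebug() << b.value();  // prints 12
    return 0;
}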

Friday 15 November 2013

"Classes. He would like to replace classes with delegation."

I had the intuition from my software engineering days, but for the last few months (due to our Forsaking Inheritance paper) I have been seeing material about dropping inheritance, or advocating that inheritance is bad, everywhere... from very old sources to very recent ones, spanning a period of nearly three decades:
  1. James Gosling's interview, which I first saw some months ago in Simon Peyton Jones' talk at OPLSS'13 (there were two main questions: What would you take out? What would you put in? To the first, James evoked laughter with the single word: Classes. He would like to replace classes with delegation, since doing delegation right would make inheritance go away. But it's like Whack-A-Mole (more laughter): when you hit one mole, er, problem, another pops up.)
  2. Why extends is evil (the recurring composition-over-inheritance point; see the sketch after this list).
  3. A recent example from the C++-themed online conference GoingNative 2013, Inheritance Is The Base Class of Evil (representative of how inheritance is still discussed as a "best avoided" feature).
  4. The Art of Subclassing: patterns for subclassing that avoid embarrassing situations with library clients.
  5. There are even refactoring options for it (e.g., IntelliJ's "Replace Inheritance with Delegation").
  6. There are solutions with Java annotations, like Lombok's @Delegate.
  7. delegate(*methods) in Ruby on Rails.
  8. Go (the programming language) diverges from the usual notion of subclassing by embedding types (without late binding, as mentioned explicitly), simplifying composition [SO]. The Gang of Four's crucial principle is "prefer composition to inheritance"; Go makes you follow it (I liked how the SO poster put it).
  9. The End Of Object Inheritance & The Beginning Of Anti-Rumsfeldian Modularity by Augie Fackler and Nathaniel Manista.
  10. .... and last but not least, a whole category of academic work on Subtyping, Subclassing, and the Trouble with OOP in general (article and references by Oleg Kiselyov).
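To make the recurring point concrete, here is a minimal C++ sketch of mine showing composition with forwarding in place of subclassing, using the classic stack-over-a-container example that several of the links above dissect:

#include <iostream>
#include <vector>

// Composition instead of inheritance: Stack *has* a vector and forwards
// only the operations it means to expose. Inheriting from the container
// would leak its entire interface (insert-in-the-middle and all) to clients.
class Stack {
public:
    void push(int v)   { items_.push_back(v); }   // forwarded
    int pop()          { int v = items_.back(); items_.pop_back(); return v; }
    bool empty() const { return items_.empty(); } // forwarded
private:
    std::vector<int> items_;                      // the delegate
};

int main() {
    Stack s;
    s.push(1);
    s.push(2);
    std::cout << s.pop() << "\n";  // prints 2
    return 0;
}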

Tuesday 12 November 2013

Scala Specialization: a primer on the translation scheme

Scala's specialization facility has been present since 2.8 and is enabled selectively with the @specialized annotation on generic type parameters. This means that the type parameter can be specialized on the specified primitive types, to avoid the performance burden of boxing and unboxing. In an effort to understand the basic idea of the translation scheme, I created this post as a quick reference for myself.

In a nutshell


The core concept of the translation scheme is that normal method invocations on instances must always work when the instance is not specialized; on the other hand, if enough type information exists, the compiler will rewrite method calls to the specialized variants.

In the example below, the original code defines a Specialized class that requests two specializations, one for Int and one for Double. This annotated definition yields three generated classes: a generic one called Specialized (with regular type erasure, where generic type arguments are erased and substituted by their bounds) and two specializations, Specialized$mcI$sp and Specialized$mcD$sp. The latter two extend Specialized, overriding its methods with specialized ones. The interesting part is that each specialized variant has both the overridden apply method that returns Object and a generated one: the apply method in each class delegates the call to the generated method, which performs the operation on the primitive type itself. Additionally, the generic versions of the methods in the specialized classes (like apply in line 40) are preserved, as are specialized versions of methods in the generic class (lines 4 and 5). This duality in the wiring ensures both correctness (e.g., calling the generic apply method on a specialized instance) and proper late binding.

A question that Aleksandar Prokopec answered for me in this comment was: why, in line 22, is the ret.apply() call rewritten to ret.apply$mcI$sp()? The compiler acts proactively here, rewriting every apply call to the specialized one. The static type of the receiver directs the compiler to rewrite the call, but the proper version of the method body is then found via late binding on the actual instance that was passed (potentially calling a specialized method and avoiding boxing).

References


A more elaborate description of the initial design, implementation and semantics of Scala specialization is included in Iulian Dragoș' PhD thesis, Compiling Scala for Performance. Also, Aleksandar Prokopec's blog has an elaborate post, Quirks of Scala Specialization, outlining several guidelines. Finally, I don't know what the current state of specialization is, but back in 2012 the rethinking of specialization spawned the SIP "Changes to the Specialization Phase".

Monday 28 October 2013

Reified Type Parameters Using Java Annotations

My GPCE talk about @reify is over. One of the questions asked during Q&A was how it is possible to override an object's getClass() method to return correct information. If an instance is available and obj.getClass() is invoked (where obj is declared to be of generic type T), then with our proposed generation scheme alone getClass() won't provide the full type information. getClass() cannot be overridden, and the answer is that invocations of getClass() may also need to be rewritten at the AST level. Additionally, generic type information must be stored at the object instance level to make full reflection support possible. The class literal (.class syntax), on the other hand, is currently valid only on types. With @reify, if we get to write T.class where T is abstract, then because the .class expression evaluates to the T class statically, we can generate the proper code, as we showed in the paper.

Thank you all for attending and also thank you for your comments.


And here is the poster that I presented at the SPLASH Poster Session (in PDF format):

Monday 21 October 2013

Attending SPLASH 2013

The following two weeks will be crazy. We are packing our bags to travel to Indianapolis, Indiana to present our current work and to attend the SPLASH 2013 conference. Our research team presents three papers at OOPSLA, one of which is our Forsaking Inheritance paper about a new design we proposed (we named it DelphJ): a Java-based object-oriented language that eschews inheritance in favor of class morphing and (deep) delegation. Our colleagues' papers appearing at OOPSLA are Soundly Completing a Partial Type Graph and Set-Based Pre-Processing for Points-To Analysis.

Alongside OOPSLA, we present our preliminary work on introducing reified generic parameters with Java annotations at GPCE and at the SPLASH Poster Session. We introduce a new annotation that marks generic types as reifiable; for those who haven't heard about type annotations before, they are the cool new extension to Java annotations specified by JSR 308 and to be included in Java 8.

See you in Indianapolis.


Tuesday 26 March 2013

Simple SSH/OpenVPN advice

One cannot do research effectively without being aware of shortcuts in the day-to-day interactions with the university's infrastructure. These little things may not seem top-priority at first, but after you take some time to set up your environment they will prove useful: when you do work or projects from home, when you need your institution's IP to access academic material, or when you just want to use your institution's VPN network.

This is a super quick tutorial for Ubuntu. Not a PL memo, but still... :)

1. Most universities provide VPN access. Mine provides access via an OpenVPN server, and a default configuration file is provided (along with the needed CA certificate). Instead of using a manual openvpn command with screen or the like, or polluting .profile etc., you can use the graphical network-manager-openvpn and import the provided ovpn file.

 sudo apt-get -y install network-manager-openvpn 

2. Collect all the settings and hosts that you frequently type into an ssh_config file and place it at .ssh/config. There you can enter a whole bunch of settings, like stricter checking of known hosts, compression, forwarding, the keep-alive interval, or which identity file to use for each connection (e.g., your git server). Mine is the following:

 # Defaults shared by all the lab machines
 Host linux*
       User <yourname>
 # Expand each short name to its fully qualified hostname
 Host linux01 linux02 linux03 linux04 linux05 linux06 linux07 linux08 linux09 linux10 linux11 linux12 linux13 linux14 linux15 linux16 linux17 linux18 linux19 linux20 linux21 linux22 linux23 linux24 linux26 linux27 linux28 linux29
       HostName %h.<university's hostname>

3. Use a private/public key pair for SSH connections. You will need ssh-agent, a program that starts along with an X session or a login session and holds your private keys in memory.

 ssh-keygen -t rsa  
 chmod 700 ~/.ssh  

Now check the .ssh directory for two files: one is the private key and the other is the .pub public key. What you want to do now (I assume you know the basics of public key cryptography :-D) is load the private key into memory and install the public key on the remote host, so you can connect without a password via an automatic public/private RSA key handshake instead.

If you haven't saved the key in the default location (check whether it is loaded with ssh-add -l), you should communicate it to the ssh-agent with the command below (and maybe append the command to your .profile for future reboots).

 ssh-add ~/.ssh/whereyousavedtheprivate &>/dev/null  

 ssh-copy-id -i ~/.ssh/whereyousavedtheprivate.pub remote-server 

Your .ssh/authorized_keys on the remote host also needs secure permissions (e.g., chmod 600).

That's it.

Tuesday 8 January 2013

The original notion of traits

In [1], Schärli et al. provide a simple and understandable point of view: "It should be possible to view the class either as a flat collection of methods or as a composite entity built from traits. The flattened view promotes understanding; the hierarchic view promotes reuse." The authors establish the (original) definition of traits with the points below.

  1. A trait provides a set of methods that implement behavior.
  2. A trait requires a set of methods that parameterize the provided behavior.
  3. Traits do not specify any state variables, and the methods provided by traits never directly access state variables.
  4. Traits can be composed: trait composition is symmetric and conflicting methods are excluded from the composition.
  5. Traits can be nested, but the nesting has no semantics for classes—nested traits are equivalent to flattened traits.

First of all, we can view trait composition as a way to complement single inheritance. Each trait is an independent collection of methods that implements a certain behavior. These methods can depend on other methods or objects (following the general principle of reusability), so traits can be parametric, declaring what they require. Now let's view a class built with traits. What is a class? Class = State + Traits + Glue, as the authors say: a class declares its state variables and can be extended with traits, and these traits can on the one hand operate on the class' variables via getters and setters, and on the other hand call methods from other traits. Trait composition is not an ordered relation, so order is irrelevant. The only thing that matters structurally is whether the traits, as a set, have conflicting features (methods). If traits had state, the diamond problem could arise very easily; keeping traits stateless avoids it. Conflicts arise when various traits define two or more features with the same signature. The resolution must then be made explicitly, and the authors introduce the notions of aliasing and exclusion. If methods conflict between a class (this class or a superclass) and some trait, class methods take precedence over trait methods, and trait methods take precedence over superclass methods. What is of great importance is that a method defined in a trait has the same semantics as the same method defined directly in a class composed from that trait (the flattening property).
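C++ has no first-class traits, but a small CRTP mixin can loosely illustrate the provides/requires split and the Class = State + Traits + Glue equation (an analogy of mine, not Schärli et al.'s Smalltalk model; all names are illustrative):

#include <iostream>

// The "trait": provides < and > (behavior), requires compareTo() from the
// host class (the parameterization described above). It holds no state.
template <typename Self>
struct TComparable {
    bool operator<(const Self& other) const { return self().compareTo(other) < 0; }
    bool operator>(const Self& other) const { return self().compareTo(other) > 0; }
private:
    const Self& self() const { return static_cast<const Self&>(*this); }
};

// Class = State + Traits + Glue: Point owns the state, mixes the trait in,
// and the glue is the required compareTo() operating on that state.
class Point : public TComparable<Point> {
public:
    Point(int x, int y) : x_(x), y_(y) {}
    int compareTo(const Point& o) const {
        return (x_ * x_ + y_ * y_) - (o.x_ * o.x_ + o.y_ * o.y_);
    }
private:
    int x_, y_;
};

int main() {
    std::cout << std::boolalpha << (Point(1, 1) < Point(2, 2)) << "\n";  // true
    return 0;
}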

The authors present a use case of traits by refactoring the Smalltalk-80 collection hierarchy. They argue that collections have various characteristics: explicitly ordered, implicitly ordered, unordered, extensible, immutable, keyed, etc. With single inheritance, a programmer can only provide a solution with code duplication, or by lifting everything up the hierarchy and throwing unsupported-operation exceptions (effectively disabling the methods that are not needed). The collection hierarchy was refactored to use 20 traits, each providing a different behavior and depending on others.
  1. Schärli, Nathanael, Stéphane Ducasse, Oscar Nierstrasz, and Andrew P. Black. "Traits: Composable Units of Behaviour." ECOOP 2003, Object-Oriented Programming (2003): 327-339.