13 April 2014

Short guide to GDB

GDB is an extremely useful tool for development on *nix platforms. It makes it quite easy to debug code that is failing, although it has so many different options that it can be difficult to use.

Note that I am deliberately not showing the source code of the sample before running the program, to demonstrate that we can understand most of what is going on just by using GDB.

Basic commands

To start gdb, invoke it from the command line and pass your program as an argument. In this post I am going to use a very simple dummy program, written in C++, that manages mailboxes.
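A minimal start of the session might look like this (assuming the sample binary is called mailbox and the code was compiled with debugging information):

$ g++ -g -o mailbox main.cpp   # -g embeds the debug information gdb relies on
$ gdb ./mailbox                # loads the program, but does not run it yet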

 

This will show you the initial gdb prompt, with license and version information.

 

GDB commands can be typed in this prompt. Basic readline usage is supported (e.g., CTRL+R to search the command history).

A list of basic GDB commands can be found here. A quick guide continues below.

At this point the program is loaded, but not executing. To run the program, use the "r" command. If the program takes arguments, this is the moment to pass them (e.g. "r param1 param2"). Do not pass the program's parameters on the gdb command line itself: those would be interpreted as gdb parameters.

 

The program runs as usual, and in this case it is waiting for a menu option.
We can stop the program at this point with CTRL+C, and gdb will show us where it is waiting:

 

In this case, the program was running an internal system function, so the information is not very useful. However, we can see the stack using the command bt:

 

We can see that seven levels up from our current point we were at line 65 of our main.cpp file. Let's go up and see what is going on there:
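The command for that is frame (or up, repeated), with the frame number taken from the bt output of this particular run:

(gdb) frame 7   # jump straight to frame #7, the one inside main.cpp
(gdb) up        # alternatively, repeat "up" to climb one frame at a time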

 

So we are getting input from the user using std::cin. We can see the context in the code:

 

Note that if you type l again, it will show you the next 10 lines, not the same ones. We can also specify a line number to list the code around that line instead of the current one.
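For example (65 being the line we saw in the backtrace):

(gdb) l      # list ten lines around the current location; repeat for the next ten
(gdb) l 65   # list the code around line 65 instead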

We can now set a breakpoint on the line after this one, so we can see what the user input was:

We can also put breakpoints in functions, class methods and many other places using the same command.
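For instance (line 66 just illustrates "the line after" line 65 from this run, and MyClass::method is a hypothetical name):

(gdb) b main.cpp:66      # break at a specific file and line
(gdb) b add_mailbox      # break when the function add_mailbox is entered
(gdb) b MyClass::method  # break on a class method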

To continue the execution, use the “c” command:

 

gdb has stopped our program at the next executable statement, just before executing it.

Inspecting the program

We can print the values of variables or evaluate expressions involving them:

While the program is running, we can list local variables as well:
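A few illustrative commands covering both points, using the op variable that appears below:

(gdb) p op          # print the current value of op
(gdb) p op + 1      # evaluate an arbitrary expression involving it
(gdb) info locals   # list all local variables of the current frame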

Here we see op (the option variable) and menu_options, an array of function pointers.

The declaration of menu_options can be inspected from within gdb:
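The command for this is ptype; given the declaration described below, its output should look roughly like this:

(gdb) ptype menu_options
type = void (*[4])(void)    # approximately: a 4-element array of pointers to void functions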

This basically tells us that menu_options is a 4-element array of pointers to functions that take no arguments and return nothing.

We can see a list of functions in the program with the info functions command, but that will show all functions, including those in the system libraries. It is better to pass a parameter so that only the functions matching a particular regular expression are shown:
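For example (the pattern is just an illustration):

(gdb) info functions mailbox   # list only the functions whose names match "mailbox"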

We now know they are declared in main.cpp, so we can list their source code:
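For instance, using one of the functions that shows up in the next section:

(gdb) l add_mailbox   # list the source around the definition of add_mailbox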

Breakpoints

We can put breakpoints in these functions (b add_mailbox) or at a specific line of a function (e.g. b main.cpp:103).

Breakpoints can be saved to a file and restored from it:
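The commands are save breakpoints and source (the file name here is a placeholder):

(gdb) save breakpoints bps.gdb   # write the current breakpoints to a file
(gdb) source bps.gdb             # re-create them in a later session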

Breakpoints can be listed and deleted:
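For example (the breakpoint number is just an illustration):

(gdb) info breakpoints   # list breakpoints with their numbers
(gdb) delete 2           # delete breakpoint number 2
(gdb) delete             # or delete all of them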

 

Graphical interfaces for GDB

A curses-based interface is available at no extra cost; just run gdb -tui. More information is available here.

mailbox.exe sample code running inside the gdb curses interface

 

DDD (Data Display Debugger) is also widely available in many Linux distributions. It can display the source code, set breakpoints visually and help inspect data with a graphical layout of printed variables, complex data structures and pointers.

DDD displaying the debugging of a random point within the mailbox application

If you use Eclipse, CDT also supports GDB integration.

Finally, there are some nice plugins to integrate gdb within VIM, but that deserves its own entry :-)





22 April 2013

accULL

The main result of my Ph.D dissertation (whose slides you can get here, and the text in this link: Directive Based Approach to Heterogeneous Computing) was accULL, an implementation of the OpenACC standard.
This implementation is based on two pieces of software I designed, YaCF (Yet Another Compiler Framework) and Frangollo.
YaCF is basically a Python source-to-source (StS) compiler toolkit, heavily based on the pycparser project. It uses a C99 frontend with some extensions to generate an internal representation (IR) of the source code. It is then possible to implement code transformations (named Mutations) that turn the IR into different code. The pieces of code to be transformed can be located using Filters. Once a transformation is ready, the C code can be written back using a Writer class.
Frangollo is a runtime capable of handling the execution of code on accelerators (e.g. GPUs). It is written in C++, with CUDA and OpenCL components to support execution on both platforms.
Both pieces of software are very flexible and based on a layered design. This makes it possible to work on a particular layer without affecting the other pieces of the package, so the software architecture can be extended with little development effort.
Implementation details can be found in the dissertation itself.

The first version of accULL released (0.1) covered the basic implementation and offered support for both CUDA and OpenCL platforms.
We have released version 0.2 (downloadable here), which is more stable and offers improved support for the parallel, if and update directives. It also contains a set of validation tests (currently around 20), a testing script and a wrapper around the YaCF driver that makes compiling the sources easier. Details about this release can be found on the accULL blog, and my article about OpenACC and accULL has been published on the EPCC web page.

Working on accULL has been very instructive and interesting. I've learned about many things: compilers, runtimes, C/C++, Python, CUDA, OpenCL and many, many other things.
However, the major outcome of this work, at least for me, has been the opportunity to work with so many different people, and in particular with the people at La Laguna: the "boss" Kiko and the students Juanjo, Lucas and Ivan. Their untiring enthusiasm has forced me to stay up to date with the project (even after finishing my Ph.D.). In order to answer their questions, I have had to think through several ideas that otherwise wouldn't have been fully developed. Many thanks to all of them.

Since I am no longer full time on the project, my involvement has necessarily changed. Juanjo and Lucas are progressively taking over the development, whilst I am moving to a "management" role. I'll explain how the management of accULL releases works once the next release date is set. I can tell you that my beloved Kanban is involved!





4 April 2012

Converting PostScript files

Converting PostScript (PS) files is sometimes necessary, particularly if you want to produce high quality graphics for LaTeX or if you have used the "print to file" option of some programs.

In Linux, the easiest way to convert this kind of file is to use the convert command from the ImageMagick package. For example, converting a PS file to a JPEG file is as easy as typing this command in the console (assuming you have already installed the aforementioned ImageMagick package):
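Something along these lines (the file names are just placeholders):

$ convert figure.ps figure.jpg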

Better quality images can be generated using Ghostscript (gs). For example, the same conversion produces (at least for me!) better quality JPEG images:
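A command along these lines (again, file names are placeholders):

$ gs -sDEVICE=jpeg -r300 -dNOPAUSE -dBATCH -sOutputFile=figure.jpg figure.ps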

Notice the -sDEVICE parameter, which specifies which gs output device we want to use, and -r300, which specifies the desired resolution (in dpi). -dNOPAUSE and -dBATCH allow the command to be used inside a batch script without requiring user intervention.

There are several devices available; some of the most useful for me are:

  • png16m: To convert PS files to PNG
  • tiff24nc: To produce uncompressed 24-bit tiff files, useful to print directly on several devices
  • epswrite: To generate EPS files

Sometimes the original file has excessive blank space. It is possible to use the -dEPSCrop option to remove the margins and fit the page size to the image.

In case the file contains text, it is possible to further increase the quality of the result by using -dTextAlphaBits=n (where n is 2 or 4) to apply antialiasing to the fonts.
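Putting the pieces together, a cropped, antialiased PNG conversion might look like this (file names are placeholders):

$ gs -sDEVICE=png16m -r300 -dEPSCrop -dTextAlphaBits=4 -dNOPAUSE -dBATCH -sOutputFile=figure.png figure.eps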





31 August 2011

CUDA + Nvidia Optimus in Ubuntu 11.04

Some days ago I bought a new laptop, a Samsung Q330 with an NVIDIA Optimus card. After installing Ubuntu 11.04, I realized that 3D acceleration and CUDA were not available despite installing the closed-source NVIDIA kernel driver. Then I realized that NVIDIA Optimus is not fully supported on Linux (my fault for not checking first!), so I started googling to see if there was some info available. It turns out that an open source project called bumblebee enables the use of both the NVIDIA card and the Intel IGP with the proprietary drivers.

Installing bumblebee in Ubuntu 11.04 is easy: just add the ppa

and then install the bumblebee package
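In sketch form, the two steps look like this (the exact PPA name is omitted here; use the one published by the bumblebee project):

$ sudo add-apt-repository ppa:<bumblebee-ppa>   # placeholder PPA name
$ sudo apt-get update
$ sudo apt-get install bumblebee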

How it works

In addition to the main X screen associated with the first available PTY of your system, bumblebee creates a secondary one that uses the nvidia driver. Whenever a program needs to use this device, it is run on the secondary X server and its output is transferred to the main X screen. Bumblebee allows different X transfer methods to be configured, Xv being the default (and the one I am using).

Using CUDA

As usual, to be able to use CUDA on your system, you need to install the corresponding NVIDIA CUDA driver. Traditionally, I used to install it by hand. However, a manual install does not seem to work well with bumblebee. There is a CUDA PPA available for Ubuntu that worked well for me. Installation instructions are available in this other blog, although I will put a summary here for reference purposes:

Notice how the OpenCL packages are also installed. As stated in the original blog post, additional packages might be required to compile the NVIDIA GPU Computing SDK.

After restarting gdm, if everything went OK, you will be able to run your CUDA programs. If it doesn't work at first, try using optirun:
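For example, assuming a CUDA binary called vectorAdd (the name is just an illustration):

$ optirun ./vectorAdd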

This tells bumblebee to force execution of the program on the GPU device (by using the secondary X server).

I have not tested the OpenGL / CUDA interoperability with this system, although it should work properly. Performance, however, might be degraded due to the framebuffer transfer.

 





10 May 2011

Measurement of CUDA programs

During the last month, while preparing a paper for JP2011, engineers from the SAII (a computer research support unit of the University of La Laguna) did some interesting research on CUDA benchmarking. Although it might be seen as a trivial task, it required some extra effort. In collaboration with people from our research team (GCAP, the high performance computing group of the University of La Laguna), Iván, one of the engineers involved, has written some interesting notes about this issue on his blog. If you are interested in CUDA and need to measure execution times with detailed information, you should take a look at these two articles:

 





23 November 2010

HOWTO Profile OpenMP with TAU or OmpP

Introduction

It's been a while since my last post, but I've been working a lot (too much maybe) recently. As research staff in the TEXT project[0], I am deeply involved in HPC programming environments.

This post aims to be a quick help for the (few) people who want to instrument OpenMP programs for profiling. We will use TAU and OmpP.






7 November 2009

Split a Latex Beamer file by frame

Sometimes, when a presentation gets too big for a single file, I try to split the frames into several files, usually one file per slide. Tired of doing this by hand, I've written a Perl script for doing it:
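The original Perl script is not reproduced here; as an illustration of the idea, a rough Python sketch could look like this (it assumes simple, non-nested frame environments with a \frametitle inside each frame):

#!/usr/bin/env python
# Split a Beamer .tex file into one file per frame, named after the frame title.
import re
import sys

def filenamize(title):
    """Turn a frame title into a string suitable for a file name."""
    return re.sub(r'[^a-z0-9]+', '_', title.strip().lower()).strip('_') or 'frame'

def main(path):
    source = open(path).read()
    frames = re.findall(r'\\begin\{frame\}.*?\\end\{frame\}', source, re.S)
    for i, frame in enumerate(frames, 1):
        m = re.search(r'\\frametitle\{([^}]*)\}', frame)
        name = filenamize(m.group(1)) if m else 'frame_%d' % i
        with open('%02d_%s.tex' % (i, name), 'w') as out:
            out.write(frame + '\n')

if __name__ == '__main__':
    main(sys.argv[1])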

Filenamize translates the frame title into a string suitable for use as a file name.

It's only a quick solution, but maybe it's useful for someone out there…





7 August 2009

pssh-copy-id

ssh-copy-id[1] is a well-known command for system administration, especially for those deeply involved in the clustering field. In a cluster environment it is common to use ssh keys instead of passwords on multiple machines, so we can move from one machine to another without having to type a password. You can even use a key to limit a user's access to a specified command, instead of allowing the user to spawn a full shell (as you may see in [1] or [2]).

SSH key pairs are composed of two keys: a public key and a private key. For ssh keys to work, you need to publish the public key on the remote machine, so it can check that you hold the correct private key when you connect. Never publish or share your private key, as this is an enormous security risk. To publish the public key safely, you can use the shell script ssh-copy-id, supplied with the openssh package, which connects to a remote machine and writes the public key there.
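For a single host, this is roughly equivalent to the following (user, host and key path are placeholders):

$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@remotehost
# which essentially amounts to:
$ cat ~/.ssh/id_rsa.pub | ssh user@remotehost 'cat >> ~/.ssh/authorized_keys'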

When you have multiple machines, as in a cluster environment, you need to publish your public key in multiple places. A first attempt might be to wrap ssh-copy-id in a for loop, but this faces an awkward problem: either you have to type the same password multiple times, or you have to pass it insecurely, for example:

for host in host1 host2 host3; do
    yes 'MyPassword' | ssh-copy-id $host
done

Done this way, your password is exposed to every user on the machine (just by issuing a ps command), so it is not good practice.

Dr. Casiano Rodriguez Leon presented this problem to me during one of my Ph.D. courses, and suggested writing a Perl script to make publishing keys faster and more secure.

After some work, we've come up with a solution called pssh-copy-id, a Perl script/library published on Google Code [4]. We hope to refine and clean up the code so that it can be accepted on CPAN and be freely available to the whole community.

pssh-copy-id, currently a work in progress, lets you publish the key with a syntax similar to ssh-copy-id, for example:

$ pssh-copy-id  host1 host2 host3

This will ask for your password (assuming it is the same for all of the hosts) and publish the default key on all the machines listed. The password won't be exposed in any way*. In addition, pssh-copy-id checks whether the key has already been published, in which case it won't be added again. pssh-copy-id also supports hosts without a password, i.e. hosts where another key is already published.

Currently, pssh-copy-id also supports the host definition syntax of net-parscp[5], which allows us to use ranges and regular expressions to define hosts; the same command as before could be written like this:

$ pssh-copy-id host1..3

Future versions of pssh-copy-id will publish keys in parallel, spawning one process per host, so publishing a key to several hosts will be faster.

This utility could be quite a useful tool for system administrators, enabling them to publish and distribute keys faster, or to integrate it into a bigger script (as we are doing at the SAII) to simplify the problem of distributing user keys.

Notes:

* except maybe under a process memory dump, which has not been tested

References:

[1] http://linux.die.net/man/1/ssh-copy-id

[2] http://oreilly.com/catalog/sshtdg/chapter/ch08.htm

[3] http://blog.ganneff.de/blog/2007/12/29/ssh-triggers.html

[4] http://code.google.com/p/pssh-copy-id/

[5] http://code.google.com/p/net-parscp/






14 November 2008

Some notes about Django

WARNING: This post has been written in English because I want to practice. If you find a mistake (spelling, grammar, whatever), please don't laugh, and tell me where it is. Thanks for your collaboration!

In the SAII[1], we are currently developing applications using the Django framework[2]. Django is an open source web application framework written in Python. It follows the model-view-controller design pattern (MVC[3]). We chose Django because it's a powerful environment that allows us to quickly go from the UML class diagram to the model structure; after that, it's only a matter of creating the admin interface with two or three simple calls. This gives you a quick starting point, and, if you are writing a small application, you'll only need to customize a few aspects of the admin interface.
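As a rough illustration of how little is needed (the model and its fields are made up for this example, not taken from our applications):

from django.db import models
from django.contrib import admin

class Article(models.Model):               # hypothetical model
    title = models.CharField(max_length=100)
    body = models.TextField()

admin.site.register(Article)               # one call gives you a full CRUD admin page for it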


Recently, Django reached version 1.0. This release is a huge milestone for the Django team, who started three years ago with only a few lines of code. Now Django has more than 4000 commits, 40,000 lines of documentation and a really active community supporting the project.

However, perhaps you'll need some features that are not in the 1.0 version. If that is your case, you'll need to use the Django Subversion trunk[4], which is the easiest way to get the latest version of the project. This version is usually stable, and you can use it without too much trouble. But be careful: sometimes the trunk version will have API changes that could break your application or third-party applications. So, if you want to develop an application and don't need the cutting edge, I would recommend using the stable release, 1.0, or the upcoming 1.0.1, which contains some bugfixes.

In the SAII, we are in a dangerous position right now, because we need some third-party applications (django_xmlrpc and tagging) that require features from the Subversion version. This has led us to some problems, which we had to fix manually, "diving" into the Django code with the debugger. All of them were solved by updating the Django trunk or the third-party application package. Our plan at the moment is to pick the current trunk revision and use it as our "stable" revision, freezing the state of the framework until we need some extra feature.

In conclusion, Django is a really good framework for web applications. Don't hesitate to use it for projects of any size. One of the applications we developed was used by 250 users simultaneously during a short time period, and it worked flawlessly. I'll try to run some benchmarks with the new application, which will have many more users, to measure the load on the system.

[1] www.saii.ull.es

[2] www.djangoproject.com

[3] http://en.wikipedia.org/wiki/Model-view-controller





1 May 2008

Class factories in Python

Sometimes you need to create new classes at runtime (note: not new instances of a class, but classes themselves). This is useful when the classes have to be generated depending on the content, or when you have to generate many similar classes and don't want to keep typing them out.

One example is the Django web application framework. In this framework, HTML forms can be generated automagically. For example, to create an HTML form that lets you enter the data of a model called Tutu, you simply have to write a class of this kind:
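The original snippet is not shown here, but with Django's ModelForm it boils down to something like this (Tutu is the example model name; the import path is hypothetical):

from django import forms
from myapp.models import Tutu   # hypothetical import path for the example model

class TutuForm(forms.ModelForm):
    class Meta:
        model = Tutu            # the model whose fields the form should expose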

The problem arises when we have many models and have to generate automatic forms for all of them. If we have to write this class for each one, the code gets very large, and we are forced to work with a fixed URL scheme, for example.

In this case, a better approach is to use the concept of a class factory, that is, a function (or even a class) that generates new classes at runtime. This can be done in several languages, Perl for example, but the beauty of Python's syntax makes it really simple and clear:
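The original code is not reproduced here; the following is a sketch of the factory described below (get_model stands in for the getModel helper mentioned in the text, and the excluded field names are placeholders):

from django import forms
from django.db.models import get_model

def form_factory(model_name):
    """Generate a ModelForm class for the model whose name is given as a string."""
    model_class = get_model('myapp', model_name)       # resolve the class from its name
    if model_class is None:
        raise ValueError('No model named %s' % model_name)

    class _GeneratedForm(forms.ModelForm):
        class Meta:
            model = model_class                        # bind the form to the resolved model
            exclude = ('owner', 'created', 'updated')  # three fields we do not want to show

    return _GeneratedForm                              # the returned reference keeps the class alive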

The code is very Pythonic and clear, but I'll explain it a little anyway. As the docstring says, the function automatically generates form classes based on the name of the model we want. For example, if we have a model called Tutu, this function will return a class that generates a form, excluding three fields that we don't want to appear.

To do this, it first obtains a reference to the class whose name we have as a string. This could be done with a plain getattr, but the getModel function also performs certain security checks that are beside the point here. Then we check that the returned class is not None, because in that case the model we want to generate the form for would not exist.

This is where The Trick appears. We create a class, with an arbitrary name, shaped the way we want. This class is local to the function, so (a priori) it only exists inside it. Since we have the modelClass variable at hand, we set the model attribute of the Meta class to modelClass, so that the class generates a model form for that model. Finally, we make the function return a reference to the class we have just created. Because that reference exists, the class keeps existing (remember that in Python objects stay alive as long as there are references to them), and we can use it as a new class from wherever we invoked the function.

At the point where we invoke the function and obtain the class, we can create an instance of it, for example:
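For instance, with the hypothetical factory sketched above:

TutuForm = form_factory('Tutu')   # build the form class at runtime
form = TutuForm()                 # and instantiate it like any hand-written form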

As we have seen, a class factory is a very convenient way to manage the creation of many similar classes, saving code and time without endangering readability.

Some links for further reading:

[1] http://www.ibm.com/developerworks/linux/library/l-pymeta.html?S_TACT=105AGX03&S_CMP=ART

[2] http://rgruet.free.fr/PQR25/PQR2.5.html
