
ICT In Post-Conflict Reconstruction

Information and communication technologies are increasingly seen as an integral component of development, and global organizations like the World Bank are doing more to support and seed technologies that will empower the marginalized. However, post-conflict reconstruction remains a particularly challenging aspect of any community, society and country coming out of a war – more so when the war has persisted for decades.

In most cases, ICT is first put to use in humanitarian assistance, as tends to happen at the end of a conflict, though in that sense it is more likely to be used by international players in the humanitarian field. As time passes, the emphasis shifts from humanitarian assistance to longer-term development support, which often means that the government in question becomes increasingly capable of delivering services to its people.

Countries that have emerged from conflict present unique challenges that are staggering in both their scale and immediacy. While it is often easy to see the need for service delivery, it is harder to figure out the support structures and mechanisms needed to enable a more transparent and accountable execution of service provision. ICT, like anything else, requires an agreeable support system that can provide continued maintenance and further development of the foundational pieces that have been put in place. Give someone a computer and sooner or later they are going to need someone who knows how to fix that computer – preferably without incurring the costs of a consultant.

Questions of reliable power supply become imperative in post-conflict areas, and as such, more often than not, ICT cannot be implemented at the scale and sophistication that would be possible in a conflict-free scenario.

From a systems design and implementation perspective, the challenge of ICT in a post-conflict situation is slightly different from the challenges faced by ICT in development; while infrastructural challenges are largely overcome over time, questions of policy, human resources and the development of the two will still affect the smooth functioning of ICT. The design of systems to cope with post-conflict reconstruction and eventual development needs to take into account the unique challenges brought about by the end of war.



Apps Market/Store/Place Bug

If you are a keen consumer of tech news and rumours, then you are well into buzz-word fatigue with regard to application markets, or stores, or places for that matter. Every other day, it seems, someone is announcing a way for end users to access applications from a centralized location that also acts as a transaction broker for developers who want to sell their applications. This model has been popularized by Apple and successfully leveraged into the sale of millions of iPhone and iPod units. Now every big player in the industry wants to create some kind of App Store or some variant of the same model.

Interestingly, this is not a new thing at all – at least on the personal computer front, Linux and its many distributions have always come with a package manager that pulls down, as it were, whatever you want from a central repository; you even have the option to add additional repositories as your needs require. Open source, being what it is, didn't push the payment side of the model, which, alas, was perhaps one of the things that could have pushed the development model faster and farther. However, open source has a number of backers who may well benefit from this app store craze. Having operated such a model since their inception, it is conceivable that Linux distributions like Ubuntu have the requisite experience and expertise to manage an app store as a way to earn revenue. More recent versions of Ubuntu have interestingly focused on making this particular variant of Linux more cloud friendly, which would fit the app store model, since the cloud would form the foundational infrastructure on which to add more developer- and consumer-facing capabilities.

Within the last 24 hours, rumours of Windows 8 surfaced, and one of the features rumoured to be in the works is a Windows app store. Motivations aside, the more interesting question to ponder with regard to that rumour (if it ever sees the light of day) is: how would such a thing work in the Windows ecosystem? Windows is a versatile platform, but one of the things people are used to doing is hunting down software binaries to install: you either download them or buy them on a CD, do the needful installation, and voila – you have your application. What kind of confusion would an app store cause amongst more casual users of Windows? Windows has been popular in the corporate world, an environment in which IT departments exercise total control over the behaviour of the operating system, so the question that begs an answer is: how does corporate IT deal with app stores? Where exactly do their policies fit in such a mix?

I must admit that Microsoft is not without experience in managing large scale deployments of software, though the experience it has may not transfer that well. As the overseer of the most widely used operating system on the planet, it distributes updates to millions of desktops and servers around the world, which requires administrative and organizational capacity that would lend itself easily to an app store model. Contrasting this ability with what Ubuntu can brag about, the only problem with Microsoft's know-how is that much of it is probably kept within Microsoft's walls and/or requires you to be more than just a casual user of the operating system. Distribution and deployment of Windows updates may not (at least initially) scale well to include third party applications that have nothing to do with the core operating system in any case.

The app store model does present a great opportunity for small and/or first time developers, as the ability to reach a great number of users with a useful and critical application has become that much easier. While there are business advantages to app stores, they also raise the question of how to keep your wares up to date across myriad app stores, many targeting different varieties of consumers, while maintaining feature and/or performance parity across all of them.


Simple Java Persistence API (JPA) Demo: JPA Query

JPA Query Language

I don’t know if this series still qualifies as simple, but I thought I’d add some basic information about queries in JPA. In part II we looked at the EntityManager and more specifically the simple operations that it enables, like persist, find, merge, remove and refresh. In addition to these operations, JPA comes with its own query language that allows you to create custom queries over your data set.

JPA abstracts the developer and the application away from the details of how data is represented in the data store (most likely a relational database), and this abstraction effectively marries the relational and OO paradigms. However, one of the cornerstones of the relational paradigm is its query capabilities, which have so far been unmatched by any software paradigm to date. The query facilities in the OO model are limited when it comes to handling large amounts of data. While there have been attempts at developing ORDBMS (Object Relational Database Management System) data stores, these have never truly caught on in the enterprise, and so the bulk of enterprise data remains stored in relational databases. With every other application built on top of a relational database, it becomes important to build query capabilities into abstraction layers such as JPA.

The default query language of the relational paradigm is the Structured Query Language, or simply SQL. SQL has a number of defined standards, which every vendor of a relational database implements in a slightly different manner, making it a tricky language to adopt as the basis of an abstraction layer like JPA that is expected to work across multiple relational database products without resorting to expensive and complex workarounds.

The Java Persistence API Query Language (JPA QL) is the result of attempts to abstract the query facilities of the relational paradigm. It borrows from EJB QL but also fixes the weaknesses that have plagued EJB QL. The specifics of what was borrowed and what was fixed are beyond the scope of this post. JPA provides the ability to retrieve JPA-mapped entities, sorting them as well as filtering them. If you are familiar with SQL, then you have some degree of familiarity with JPA QL, as its syntax is closely modeled on SQL’s.

Specifying a Query

There are three main ways of specifying JPA queries:

  • createQuery method of the EntityManager: with this option you compose the query at run time and execute it there and then. The most immediate aspect of this approach is that your queries are not checked/parsed at deployment time, which means that obvious errors are only discovered when the code is executed.
  • Named queries: named queries are defined along with the corresponding entity beans. Several named queries can be defined for each entity, enabling filtering and sorting using various properties of the entity. Unlike runtime queries, these are parsed at deployment time, which means that any errors are discovered before the code that depends on them is executed.
  • Native queries: this gives you the ability to define queries using SQL instead of JPA QL. You can create native named queries as well.
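To make the named-query option concrete, here is a sketch of how one might be declared on the Hotel entity used in this series. The query name `Hotel.findByName` and the trimmed-down class body are my own; only a `name` property is assumed, as it appears in the queries later in this post.

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

// Sketch only: the full Hotel entity was defined in part I of this series.
// Because the named query is parsed at deployment time, a typo in it fails fast.
@Entity
@NamedQuery(name = "Hotel.findByName",
            query = "SELECT h FROM Hotel h WHERE h.name = :name")
public class Hotel {
    @Id
    private Long id;
    private String name;
    // getters, setters and the rest of the mapping omitted
}
```

You would then execute it with `em.createNamedQuery("Hotel.findByName").setParameter("name", "Nairobi").getResultList()`.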

Querying

Retrieving data

The most common query operation is the select operation, which returns all or a subset of the records in the database. With JPA QL, the select operation returns a collection of zero or more mapped entities. The operation can also return properties of a mapped entity. A simple select query looks as follows.

SELECT h FROM Hotel h

 

SELECT h.name FROM Hotel h

Notice how you select from the entity and not from a table as you would in SQL, though the syntax of the query is otherwise not different from what you would write in SQL. The first query returns zero or more Hotel entities from the database. The Hotel entity was defined in the first installment of this demo series. The second query in the sample above selects a single property of the Hotel entity.

Lazy vs Eager Loading: FETCH JOIN

When you design your entity classes with associations and relationships, loading and accessing those relationships at run time becomes important. For example, a hotel has rooms, and you can decide whether you want the rooms associated with each hotel to be loaded when the hotel entity is retrieved (eager loading) or only when you explicitly access the associated rooms (lazy loading). When defining the association between entities you can declare whether you want lazy or eager loading, but JPA QL also allows you to eagerly load the objects in an association on a per-query basis.

SELECT h FROM Hotel h JOIN FETCH h.rooms

With the above query, all the hotel objects returned will have their associated rooms loaded as well. This gives you eager loading without specifying it in the relationship between the Hotel and Room entities.

Filtering & Sorting

It is not always the aim of a data retrieval operation to return every last record in the database; sometimes we are interested only in the few records that meet particular criteria for the operation at hand. Within the context of the simple app set up for this series, we might be interested only in hotels matching a particular name, with that name forming our filtering criterion. The samples below give JPA QL queries that retrieve a collection of Hotel entities filtered and sorted on the name property.

//Filtering

SELECT h FROM Hotel h WHERE h.name = 'Nairobi'

//Sorting

SELECT h FROM Hotel h ORDER BY h.name

//Filter and Sort

SELECT h FROM Hotel h WHERE h.name = 'Nairobi' ORDER BY h.name

Once again, notice the similarity to an SQL statement that would return rows meeting the provided filtering and sorting parameters. So far these are just simple queries that don’t show much of JPA QL’s capabilities, but they are a necessary step in appreciating how JPA QL queries are written.

Of greater importance is showing how these queries can possibly be composed within the context of Java code.

Query q = em.createQuery("SELECT h FROM Hotel h ORDER BY h.name");

List<Hotel> results = q.getResultList();

A further example uses a named parameter to filter:

Query q = em.createQuery("SELECT h FROM Hotel h WHERE h.name = :hotelName");

q.setParameter("hotelName", hotelName);

List<Hotel> results = q.getResultList();

Something that may be a bit tricky for first-time users of JPA is composing queries that use the LIKE operator to filter:

Query q = em.createQuery("SELECT h FROM Hotel h WHERE h.name LIKE :name");

StringBuilder sb = new StringBuilder();

sb.append("%");

sb.append(name);

sb.append("%");

q.setParameter("name", sb.toString());

List<Hotel> results = q.getResultList();
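The wildcard wrapping above is fiddly enough to be worth pulling into a tiny helper. This is a plain-Java sketch of my own; the helper name `containsPattern` is not part of JPA:

```java
public class LikeHelper {
    // Wraps a raw search term in '%' wildcards so that, bound to a LIKE
    // parameter, it matches the term anywhere in the column value.
    static String containsPattern(String term) {
        return "%" + term + "%";
    }

    public static void main(String[] args) {
        // The resulting string is what you would pass to q.setParameter("name", ...)
        System.out.println(containsPattern("Hilton")); // prints %Hilton%
    }
}
```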

Assume for a moment that you want a list of all hotels with a particular number of rooms (say, more than 20) – here is how you would formulate such a query:

Query q = em.createQuery("SELECT h FROM Hotel h WHERE size(h.rooms) > 20 ORDER BY h.name");

List<Hotel> results = q.getResultList();

This concludes our look at JPA QL. It is not a complete examination of the power of JPA QL, but a glimpse at what is possible.


Rising Functional Programming

The expected shift of computer processing to even greater degrees of parallelism has sparked interest in new ways of developing software that take full advantage of the horizontal increase in processing power. The area that has received the bulk of the attention is programming languages and tools. In a many-core world (as opposed to what is now called multi-core), shared state becomes very tricky, so most of the mainstream programming languages would be difficult to use to produce software. While almost all the mainstream imperative languages have a library to enable the development of code capable of parallelism, most of these mechanisms are not baked into the language, and sometimes the initial design of the language itself gets in the way. In the design of most mainstream imperative programming languages, immutable data types are rare or sometimes non-existent altogether.

Increased interest in functional programming has given rise to new languages that serve as an adequate bridge between the existing imperative programming mindset and the much needed shift to a world of parallelism. Functional programming is certainly not new, as many of its techniques have been implemented in languages like Scheme, Haskell and Erlang, amongst others. However, these languages and the ideas they implement largely remained in academic circles until recently, when the software industry took a more proactive role in transferring the knowledge of academia to the industry. Programming languages like F# and Scala borrow heavily from the aforementioned pioneers of functional programming.

The newest in this growing list of new programming languages is Google’s Noop. The following is a description of Noop from the project’s web site:

… new language experiment that attempts to blend the best lessons of languages old and new, while syntactically encouraging what we believe to be good coding practices and discouraging the worst offenses. Noop is initially targeted to run on the Java Virtual Machine.

The basic assumptions in the design and development of Noop are certainly interesting: integrating testing into the programming language can greatly improve code quality, and making the language truly object oriented should improve its readability. I have found functional programming languages to have a pleasantly concise syntax that effortlessly achieves what would require a ton of boilerplate code in supposedly OO languages like Java or C#, which still include primitive data types.


Programming Paradigms

In the past I enjoyed the concept and practice of programming because it provided an opportunity to explore a way of thinking about a problem without the usual constraints one may face in the real world. The greater challenge (hence satisfaction) is in defining a model that will account for any potential failures and still accomplish its intended purpose. As time has passed, I have come to focus specifically on design and the resulting architecture. Designing anything is a process of creating a model that can account for the solutions to aspects of the specified problem. That is reductive in and of itself, but there are much more insightful aspects of problem solving that need to be taken into account in designing and developing a solution.

In any design effort, the ability to abstract from the problem remains imperative. While the generally accepted adage that too much of [take-your-pick] is a poison applies, abstraction done right can provide a practical solution to a multitude of problems. Programming paradigms have always been about creating models that either provide a way for us to give instructions to computers or a way for us to describe the world in a manner that a computer can comprehend and hence process. Programming languages remain a way for humans (programmers, software engineers, etc.) to interact with a computer – giving it instructions on what to do and how to handle the particulars of our reality. The models implicitly encoded into programming languages represent our thinking, whether that means adopting a machine-like view of the world or bringing the machine closer to the way we appreciate the world.

What are generally referred to as low level programming languages were essentially intended to enable us to communicate with computers, and as such they bear a close relationship to the way in which computers operate. Think of assembly language and how you program in it.

With time, additional abstractions were added that allow us to focus more on giving computers instructions as opposed to prescribing the manner in which the computer carries out those instructions. This focus on instructions gave rise to what are generally referred to as procedural programming languages, in which the emphasis is on the results of the operations that need to be accomplished. The ability to focus on what you want done, and how it is achieved in steps, naturally led to greater interest in using computers to carry out what are essentially repetitive tasks that could easily be encoded in a number of functions, which can then be executed to produce the desired result (or report errors, if any).

This focus on the procedures needed to accomplish a task leads to a huge codebase that is hard to maintain and/or evolve to meet new and/or changing circumstances. This problem would seem to come from the fact that the procedural way of software development does not adequately account for how the real world operates. In the real world, things exist and operate as a single unit – there is no difference between what something is and what it does.

Personally, I get the impression that this is when programming became a bit more philosophical, in the sense that there was a deliberate effort to model the world in terms of its nature and its essence. The nature of the world describes what the world is: in OOP, this is simply the state of an object, typically denoted by properties/attributes/fields, depending on the terminology of your platform of choice. You may notice that the nature of objects so defined does not need to change in order to make things happen, because OOP relies on message passing to get objects with the appropriate nature to carry out the intention of their essence (what you do is defined by your nature, and your nature defines what you do).

While OOP allows for a better abstraction of the real world, the manner in which it has been implemented thus far has a serious shortcoming. All the OOP languages I have come across are rather verbose, as the design process needs to describe all applicable elements of the problem space in code. With increasingly large programs, it becomes much more challenging to maintain them or ensure that they are tested to the satisfaction of end users. So testing frameworks have mushroomed around OOP languages, such as Java with JUnit (among so many others).

For all intents and purposes, OOP still bears some lingering association with how a machine would go about processing instructions. The so-called Fourth Generation Languages (4GL), like the Structured Query Language (SQL), have shown us how to express our intention to the machine and have the machine figure out the means of getting to our intentions, or at the very least as close to them as possible. The oft-referenced Moore's law continues its march towards ever more powerful machines, albeit in a slightly different way. With powerful processors driving our computers, we do not have to be chained to the vagaries of machine-type thinking.

Another more poignant point to consider is the increased use of computers for entertainment (gaming etc.), business and socializing. The nature of the problems facing social networking applications is markedly different from what businesses faced at the advent and development of the current mainstream programming languages. A business environment invariably has some kind of structure around it, encoded in policies, procedures, organization structure and the processes that the organization runs. Starting from such a foundation, it is possible to formulate a few procedures which can be executed on a regular or ad hoc basis to great effect. However, consider the way in which social networking sites are used – a single person may have a Facebook account, a Twitter account and a YouTube account, in addition to web mail accounts. These applications have become people centric, and the number of people involved can quickly become a challenge for social networking sites that have managed to garner a big enough following.

The social network craze reveals an interesting dimension of how programming languages have evolved over time. At the outset, a few academics used computers to help with research, then the business world caught on, and now we have to face the reality that perhaps programming languages need to be less rigid. Often, when discussing IT related subjects, less rigid may easily read as less secure, though in this context less rigid but more robust would be the best outcome in the evolution of programming languages. Objects are good as a way to model the world, but they lack a certain degree of expressiveness in effectively illustrating and modeling the state of the world as seen by a person who cares more about getting things done and less about the steps taken to get to the end.


java.lang.NoSuchMethodError: net.sf.ehcache.Cache.<init>

This entry is about the aforementioned exception, which was thrown in an application I am working on.

Environment

The application uses the following libraries

  • Spring framework (version 2.5)
  • Hibernate JPA (version 3.2.x)
  • Acegi Security

Cause

As indicated in the exception, the error is thrown by the ehcache library, in my case version 1.2.3. In simple terms, the init method cannot be found in that version of ehcache. The stack trace indicates that the absence of the method affected the creation of a cache for use by Acegi. The cache bean had been configured in Spring so that it could be injected into the Acegi authentication & authorization beans.

How did version 1.2.3 of ehcache end up in the application’s class path? Well, that boils down to my recent decision to switch from TopLink as a JPA library to Hibernate JPA. The Hibernate JPA bundle in my IDE uses version 1.2.3 of ehcache instead of version 1.3.0, which I also have in my class path.

Solution

The solution that worked for me (suggested in the reference below) was to remove the old version of ehcache and use ehcache version 1.3.0.
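For what it's worth, if the build is Maven-based (the post doesn't say how dependencies are managed, so treat this as an assumption, and the Hibernate version below is only illustrative), the stale copy can be kept off the class path by excluding ehcache from the Hibernate dependency and pinning 1.3.0 explicitly:

```xml
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate</artifactId>
  <version>3.2.6.ga</version>
  <exclusions>
    <exclusion>
      <!-- keep the transitive ehcache 1.2.3 off the class path -->
      <groupId>net.sf.ehcache</groupId>
      <artifactId>ehcache</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>net.sf.ehcache</groupId>
  <artifactId>ehcache</artifactId>
  <version>1.3.0</version>
</dependency>
```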

Reference


RAM Upgrade

I have always used computers with low hardware specifications. I recall my first foray into programming, when there was a host of limitations to overcome with regard to memory (RAM) and, of course, secondary storage. Those were the days when every computer you came across had a floppy disk drive. What is more interesting is that at some point I always find myself outgrowing the available amount of RAM, or hard disk space for that matter.

Usage

Up until last week, I had been using a machine with 1 GB of RAM, which was adequate for most normal tasks but did not quite suit my needs. I have a bias towards web applications, which means that in a typical development scenario I would have at least a couple of servers running, alongside simple text editors as well as tools like Macromedia Dreamweaver (that is, when I am working with PHP). Of course a browser or two (Firefox and IE primarily, though increasingly Google Chrome) is usually running. For a PHP development session, there would also be MySQL Query Browser, which I find rather nice for tweaking and testing my queries before I plug them into Macromedia’s wizards, or into my own custom code if the wizards prove too simplistic a solution. It goes without saying that for a Microsoft-targeted web development undertaking, a combination of Visual Studio and SQL Server Management Studio would be running, in addition to any reference material that may come in handy at the time.

Memory Greedy Software

I am sure you have heard by now all the comments you can take about Windows Vista’s downsides, and I must admit there is some merit to some of those comments. The aforementioned usage scenario happened to be brought to life on a machine running Windows Vista with that 1 GB of RAM. Of course, those in the know would promptly point out that 1 GB of RAM is not enough, but I can’t help pointing to the Vista Capable fiasco going on; it is all over the internet, and a quick Google search would produce some interesting material and a more entertaining look at the drama that can take place between some of the computer industry heavyweights. Generally, Windows Vista is a solid piece of work, regardless of what anybody says, but at the same time it does suffer from some typical annoyances, as should be expected of any piece of software. With 1 GB of RAM, I have had only two occasions of my machine suffering the equivalent of a BSOD (Blue Screen of Death), though getting it to do anything in a hurry is close to expediting the second coming.

Another memory hog is Firefox; that browser just takes up too much memory, and leaving it running for days on end, as I am used to doing, makes matters worse. The good thing is that if it crashes, there is every possibility of restoring my opened tabs. Firefox 3.0 seems to handle memory better than FF 2.x, but it would still be a huge surprise not to find it at the top of the list of processes (sorted by memory size) in the task manager.

The aforementioned development platforms of PHP and ASP.NET are generally manageable with the 1 GB of RAM, though appropriate time has to be allowed for the mouse cursor to respond. There are times at which I prefer to code in Java, and in my experience with the constrained hardware specs I have been using, it is not possible to code for a whole day using all the nice Java tools without forcefully killing a Java process or two. There was a time I wanted to get some hands-on experience with JBoss Seam, and the most straightforward way of doing this was to use JBoss Application Server (AS) in conjunction with JBoss Seam. JBoss AS could not start! The interesting bit was that I could not reliably run an IDE like Eclipse, though NetBeans could be used, albeit not for extended periods of time.

With such memory hungry software running on limited hardware, I would have to admit to the robustness of Windows Vista, though of course there are limits to what the OS itself can do as a platform on which other pieces of software run. Hold on before you start protesting a good (but truthful) remark about Windows Vista: I am definitely looking forward to some of Windows Vista’s shortcomings being addressed in the upcoming Service Pack 2 as well as in Windows 7.

The Upgrade

I decided to upgrade my RAM and have added 2 GB to the mix, taking the total to 3 GB. I am running on a 32 bit processor (dual core), which puts my primary memory limit at 4 GB, so at some point in the future I may decide to add another 1 GB, though I doubt that will happen because by then it would be prudent to purchase new hardware. So far the performance of the machine has been most pleasant: there are no longer any noticeable pauses waiting for the mouse cursor to react. However, these are just the early days, as I am sure more demand will be placed on the additional RAM :-).
