Archive for October, 2009
For the better part of the weekend, I have been looking for the cause of, and a possible solution to, Error 31. I thought the technical department of my service provider would have a solution well in hand, so I deferred to them. However, it turns out they don’t have any sensible solution, though I must admit the matter did receive quality attention – thanks, Francis.
It turns out the problem was with my computer – more specifically, with my Windows Vista OS, of all things. I started scouring the web for possible solutions, as well as following leads supplied by Mr. Francis. In all the material I have read with regard to Error 31, it seems the error has something to do with the Remote Access Service (RAS) that runs on Windows Vista.
OS: Windows Vista SP1
Device: USB 3G HSDPA Modem (E160 – Huawei)
What Worked For Me
I uninstalled Virtual PC, which until this point had not interfered with the 3G HSDPA connection. More importantly, I used Revo Uninstaller to remove Virtual PC, which means that all traces of it were removed from the registry as well as the hard disk.
JPA Query Language
I don’t know if this series still qualifies as simple, but I thought I’d add some basic information about queries in JPA. In part II we looked at the EntityManager – more specifically, the simple operations that it enables, like persist, find, merge, remove and refresh. In addition to these operations, JPA comes with its own query language that allows you to create custom queries over your data set.
JPA abstracts the developer and the application away from the details of how data is represented in the data store (most likely a relational database), and this abstraction effectively marries the relational and OO paradigms. However, one of the cornerstones of the relational paradigm is its query capabilities, which have so far been unmatched by any other software paradigm. The query facilities in the OO model are limited when it comes to handling large amounts of data. While there have been attempts at developing ORDBMS (Object Relational Database Management Systems) data stores, these have never truly caught on in the enterprise, and so the bulk of enterprise data remains stored in relational databases. With so many applications built on top of a relational database, it becomes important to build query capabilities into abstraction layers such as JPA.
The default query language of the relational paradigm is the Structured Query Language, or simply SQL. SQL has a number of defined standards, which every vendor of a relational database implements in a slightly different manner, making it a tricky language to adopt as the basis of an abstraction layer like JPA that is expected to work across multiple relational database products without resorting to expensive and complex workarounds.
The Java Persistence API Query Language (JPA QL) is the result of attempts to abstract the query facilities of the relational paradigm. It borrows from EJB QL but also fixes the weaknesses that have plagued EJB QL. The specifics of what was borrowed from EJB QL and what was fixed are beyond the scope of this post. JPA provides the ability to retrieve JPA-mapped entities, sorting them as well as filtering them. If you are familiar with SQL, then you have some degree of familiarity with JPA QL, as its syntax is closely modeled on SQL’s.
Specifying a Query
There are three main ways of specifying JPA queries:
- createQuery method of the EntityManager: with this option you compose the query at run time and execute it there and then. The most immediate consequence of this approach is that your queries are not checked/parsed at deployment time, which means that obvious errors are only discovered when the code is executed.
- Named Queries: named queries are defined along with the corresponding entity beans. Several named queries can be defined for each entity, thus enabling filtering and sorting using various properties of the entity. Unlike runtime queries, these queries are parsed at deployment time, which means that any errors are discovered before code that depends on your named queries is executed.
- Native Queries: this gives you the ability to define queries using SQL instead of JPA QL. You can create native named queries as well.
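To make the three options concrete, here is a rough sketch (the Hotel entity comes from the first installment of this series; the query name, the injected EntityManager em and the HOTEL table name are assumptions):

```java
// 1. Dynamic query, composed and parsed at run time
List hotels = em.createQuery("SELECT h FROM Hotel h").getResultList();

// 2. Named query, declared on the entity and parsed at deployment time
@Entity
@NamedQuery(name = "Hotel.findAll", query = "SELECT h FROM Hotel h")
public class Hotel { /* ... */ }

List all = em.createNamedQuery("Hotel.findAll").getResultList();

// 3. Native query, written in the database's own SQL dialect
List raw = em.createNativeQuery("SELECT * FROM HOTEL", Hotel.class)
             .getResultList();
```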
The most common query operation is the select operation, which returns all or a subset of the records in the database. With JPA QL, the select operation returns a collection of zero or more mapped entities. The operation can also return properties of a mapped entity. A simple select query looks as follows.
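For example (a sketch, assuming the Hotel entity exposes a name property):

```
SELECT h FROM Hotel h

SELECT h.name FROM Hotel h
```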
Notice how you select from the entity and not from a table as you would in SQL, yet the syntax of the query is not that different from what you would write in SQL. The query returns zero or more Hotel entities from the database. The Hotel entity was defined in the first installment of this demo series. The second query in the above sample selects a property of the Hotel entity.
Lazy vs Eager Loading: FETCH JOIN
When you design your entity classes with associations and relationships, loading and accessing these relationships at run time becomes important. For example, a hotel has rooms, and you can decide whether you want the rooms associated with each hotel to be loaded when the hotel entity is retrieved (eager loading) or only when you explicitly access the associated rooms (lazy loading). When defining the association between entities you can declare whether you want lazy or eager loading, but JPA QL also allows you to eagerly load the objects in an association on a per-query basis.
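In JPA QL this is done with the FETCH JOIN operator; a sketch, assuming the Hotel entity has a rooms association:

```
SELECT DISTINCT h FROM Hotel h JOIN FETCH h.rooms
```

The DISTINCT keyword guards against the duplicate Hotel instances that the join would otherwise produce.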
With the above query, all the hotel objects returned will have their associated rooms loaded as well. This gives you eager loading without specifying it in the relationship between the Hotel and Room entities.
Filtering & Sorting
It is not always the aim of a data retrieval operation to return every last record in a database; sometimes we are interested only in the few records that meet particular criteria for the operation at hand. Within the context of the simple app set up for this series, we may just be interested in hotels that are in a particular town. The name of the town in question would form our filtering criterion. The sample below gives a JPA QL query that would enable us to retrieve a collection of Hotel entities with a particular town property.
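A sketch of such a query (the town and name properties of the Hotel entity are assumptions):

```
SELECT h FROM Hotel h WHERE h.town = :town ORDER BY h.name
```

Here :town is a named parameter whose value is supplied when the query is executed.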
Once again, notice the similarity to an SQL statement that would return rows meeting the provided sort and filtering parameters. So far these are just simple queries that don’t show much of JPA QL’s capabilities, but they are a necessary step in appreciating how JPA QL queries are written.
Of greater importance is showing how these queries can be composed within the context of Java code.
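As a sketch, the town filter could be wrapped in a method like the following (the method name and the town and name properties are assumptions):

```java
@SuppressWarnings("unchecked")
public List<Hotel> findHotelsByTown(EntityManager em, String town) {
    // Named parameters (:town) are bound with setParameter before execution
    return em.createQuery(
                 "SELECT h FROM Hotel h WHERE h.town = :town ORDER BY h.name")
             .setParameter("town", town)
             .getResultList();
}
```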
A further example of using queries to filter
Something that may be a bit tricky for first-time users of JPA is composing queries that use the LIKE operator to filter results.
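The common stumbling block is where the wildcards go: they belong in the parameter value, not in the query string. A sketch, assuming a name property on the Hotel entity:

```java
// Matches any hotel whose name contains the search term;
// "Grand" is a hypothetical search term supplied by the caller.
List hotels = em.createQuery("SELECT h FROM Hotel h WHERE h.name LIKE :pattern")
                .setParameter("pattern", "%Grand%")
                .getResultList();
```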
Assume for a moment that you want a list of all hotels with a particular number of rooms (say, more than 20 rooms) … here is how you would go about formulating such a query:
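One way to express this is with the SIZE operator over the rooms association (a sketch, assuming the Hotel–Room relationship from earlier in the series):

```
SELECT h FROM Hotel h WHERE SIZE(h.rooms) > 20
```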
This concludes our look at JPA QL. It is not a complete examination of the power of JPA QL, but a glimpse at what is possible.
The expected shift of computer processing to an even greater degree of parallelism has sparked interest in new ways of developing software that will take full advantage of the horizontal increase in processing power. The key area that has received the bulk of the attention is programming languages and tools. In a many-core world (as opposed to what is now called multi-core), shared state becomes very tricky, so most of the mainstream programming languages would be difficult to use to produce such software. While almost all the mainstream imperative languages have a library to enable the development of parallel code, most of these facilities are not baked into the language, and sometimes the initial design of the language itself gets in the way. In the design of most mainstream imperative programming languages, immutable data types are rare or non-existent altogether.
Increased interest in functional programming has given rise to new languages that serve as an adequate bridge between the existing imperative programming mindset and the much-needed shift to a world of parallelism. Functional programming is certainly not new, as many of its techniques have been implemented in languages like Scheme, Haskell and Erlang, amongst others. However, these languages and the ideas they implement largely remained in academic circles until recently, when the software industry took a more proactive role in transferring the knowledge of academia to the industry. Programming languages like F# and Scala borrow heavily from the aforementioned pioneers of functional programming.
The newest in this growing list of new programming languages is Google’s Noop. The following is a description of Noop from the project’s web site:
… new language experiment that attempts to blend the best lessons of languages old and new, while syntactically encouraging what we believe to be good coding practices and discouraging the worst offenses. Noop is initially targeted to run on the Java Virtual Machine.
The basic assumptions in the design and development of Noop are certainly interesting. Integrating testing into the programming language can greatly improve code quality, and making the language truly object oriented will improve its readability. I have found functional programming languages to have a pleasantly concise syntax that effortlessly achieves what would have required a ton of boilerplate code in supposedly OO languages like Java or C#, which still include primitive data types.
On the face of it, that is a rather silly question, since within it lies the answer. Software is an important component of your computing experience – without it, you would not have a computer in the first place. However, having installed countless pieces of software under varying licenses, I have come to wonder what an End User License Agreement (EULA) really means.
It is a legal document as far as I can tell, so the wisdom of putting it in front of a lay person to indicate acceptance or refusal (with a handy little button) seems rather illogical. I have done my best to go through some of these licenses, but the legalese is just too convoluted to make any immediate sense. In a perfect world, you would retain a lawyer who would break the license down accordingly and explain to you what it means and does not mean. The practicality of marching down to your lawyer every time you want to install a piece of software seems rather counterproductive, to say the least.
These licenses are an integral feature of proprietary software in that there is a real chance you may be breaking the law if you don’t abide by the stipulations contained therein. As they like to say, I am not a lawyer, but I would expect that for anything to hold up in a court of law (more so the act of entering an agreement), the parties should understand what their respective obligations are. Without a lawyer present, any chance of a lay person making sense of an EULA is slim at best.
Once you have installed the software (after duly agreeing to the terms of the EULA), do you have ownership of the software that is now on your computer? With a proprietary piece of code, you don’t own it, of course; hence the EULA is likely to explain that you are not supposed to reverse engineer it or tamper with it in any way. However, the EULA is also likely to stipulate that if you lose your precious data as a result of using the program in question, the producer of the software is not responsible. Such a situation makes you want to know why exactly you are paying for the software in the first place; for all intents and purposes it may not work as advertised, and you have no legal recourse for any harm that may result from your use of the software.
And there are software manufacturers whose programs behave more like Trojan horses. You install a single piece of software from a company, and the software has an auto-update feature which periodically checks for updates. Here is the problem: the update would also (in addition to offering the new release) install additional, unrelated software onto your machine. In a sense, the original program acts as a gateway for the software manufacturer to invite even more software onto your hard disk.
This constant need to outdo each other in order to gain the end user’s favor does essentially look remarkably like what a virus writer would do. I recently had to update the Windows Live suite produced by Microsoft, and somewhere along the way I checked a box that would allow the installer to change the home page and default search engine on my browser – which in this case I assumed (since Windows Live is a Microsoft product) would apply to, and only affect, Internet Explorer. The default search engine for address bar searches on my Firefox installation is now Bing … no, I didn’t want Bing, and there is no simple way of undoing the settings change. In yet another trespass, I ended up with a .NET plug-in installed in Firefox while installing something completely unrelated.
With increased competition and jockeying for dominance, major industry players are hacking each other to bits. Google’s decision to integrate their Chrome browser into Internet Explorer via a plug-in seems like a good move on the surface, and understandably so, but there are far greater implications of control and ownership with such a move. As Mozilla points out, it blurs the boundary between what Internet Explorer is responsible for and what happens because of Chrome’s extension. It is easy to get excited at the thought of Google putting its engineering prowess to work and bringing cutting-edge technologies to the most dominant browser in the market, but the move has far greater implications than just new technologies. The very introduction of new technologies suggests that bugs will be discovered, so keeping clear boundaries between software components is good, as it enables proactive management.
The Windows operating system has a number of utilities that have come up to address weaknesses in the manner in which the operating system runs and manages itself and the programs that have been installed on it. Recently, I had the misfortune of a failed installation – the installation process of a program stopped prematurely, which meant that the program’s uninstaller was never installed. This became a problem that could not easily be fixed using the Windows Control Panel, because I was not able to remove the program. I attempted to reinstall the program in an effort to get the uninstaller in place, but I could not reinstall either, since the said program was supposedly already successfully installed. Just deleting the program would seem the most logical thing to do, but traces of the software would still remain in the registry and hence lead to a slower system in the long run. This particular situation illustrates a very common problem with most software running on Windows: it is much easier to get a program installed than it is to get it removed/uninstalled properly. There are countless pieces of software that leave their skeletal remains on the hard disk and in the Windows Registry. Such sloppiness shows a disregard for the ownership of the computer hardware on which the software runs – including the operating system.
In closing, users will want, and should get, the latest and greatest software available on the market, but software producers need to allow users to kick them off their hard disks – and to do so with finality and assurance that there are no skeletons left on the hard disk or in the registry. Even more importantly, stop producing Trojan horses. The fact that I downloaded and use iTunes does not mean that I either desire or want the latest and greatest version of Safari.
Would you not prefer to have the tools to remain in control of your computer?
In modern times, any discussion of open source is bound to stir up the most heated exchange of words, views, opinions and perhaps even insults. Yet what becomes obvious upon closer examination of the debate is that the people debating the subject either take a narrow view of open source or perhaps just defend a small section of it. Increasingly, the debate surrounding open source and closed source is best understood, and left, as a choice that should be exercised in light of circumstance.
What gets lost in the middle of the flame war is the fact that open source is first and foremost a movement – one that is largely community driven and that espouses the sharing of effort, thus requiring that the products of the movement be accessible to all members of the community. Note that such a view of open source does not automatically suggest that it is the sole domain of IT professionals of varying skills and interests. The nature of the movement, and of participation in it, can accommodate both individuals and large corporations alike.
The very nature of the movement does not require that organizations denounce any other ideologies they may have in order to leverage what the open source movement has to offer. Companies like Yahoo, Google and others are heavy users of open source, but the largely open and free services they offer are as proprietary as Microsoft’s Windows and Office suites. As an example of Google’s proprietary holdings, the company issued a cease and desist letter to a participant in the open source community built around Android.
Increased use of open source products to create services also makes the movement much more formidable compared to competing ideologies – more specifically, ideologies that may come into conflict with particular aspects of open source, such as code sharing.
Any mention of the champions of closed source or proprietary software development would put Microsoft at the top of the list, but as a matter of fact Microsoft is no stranger to open source, though it is certainly more openly opportunistic and no doubt looks out for its own survival as a money-making venture. Over the years, Microsoft has demonstrated an exceptional ability to emulate the advantages that naturally occur in the open source movement because of its participatory nature and community approach to software development. Windows 7 has been tested much more widely than previous releases, with Microsoft’s development team actively encouraging feedback so as to continue to make adjustments and improvements; it is the most visible example of how Microsoft has managed to create a buzz around a release much earlier than has been the norm. Across Microsoft products generally, CTPs (Community Technology Previews) have become much more common in recent years.
The decision to release software as open source is more often than not a strategic move aimed at commoditizing a market. I am not aware of any company that creates a unique or market-leading product and chooses to release it as open source. Open sourcing usually targets products that do not have market share as yet and/or whose creators are not able to provide the support necessary to bring them to any appreciable level of dominance in the market. Once again, this practice is used both by companies that espouse closed-source software development and by those that rely heavily on open source.
Commercial open source is where the business should be and where the money-making opportunities will, and should, arise. One of the main features of closed-source software is that the barrier to entry is usually high, such that simple human ingenuity may not be enough to come up with something unique and different. With such a barrier to entry, open source becomes a fundamentally attractive option for governments whose aims and objectives will, and should, include the cultivation and development of a software development industry within their respective borders.
Some of the best known companies in the world today had their start at universities and schools. Yes, Microsoft does offer access to its source code for academic research, but how possible is it to come up with products and/or services that build on your knowledge, understanding and modification of that source code? This is perhaps one of the reasons why many of the more recent start-ups tend to build their infrastructure on open source tools and platforms. Microsoft is certainly aware of this and has had a number of initiatives targeted at students to encourage them to build their businesses on Microsoft technologies and tools. Such efforts, however, will be limited by how well Microsoft can tap into and harness the sense of community and adventurous exploration of its platforms – with the possible benefit of making it to the big leagues, as has been proven by the current darlings of social networking that were started in campus dorm rooms using available and accessible open source tools and platforms.
While the efforts of Microsoft and other proprietary companies to encourage students to look at their platforms and build on them offer a semblance of openness, they remain both myopic and deeply misguided. Take any third world country and pose this question: how likely is it for that country to have a Windows kernel expert? It is not at all impossible, but how practical is it to cultivate that level of expertise? By contrast, how much effort would it take to nurture and grow a Linux/BSD kernel expert in any third world country? Given the nature of source control and management at a proprietary company compared with that of the open source movement, it is obvious that an open source product is much more likely to spawn low-level experts on the inner workings of any particular product or service.
In conclusion, open source encompasses a lot more than just the sharing of code and the associated licenses that dictate how that sharing happens. It is, at its core, a movement that can accommodate both corporations and individuals who identify with the spirit of the movement and hence become members. As a movement, it has various roles requiring different skills; hence everyone with a talent can contribute to the well-being of the movement. Open source provides the best opportunity for governments (of third world countries in particular) to cultivate a vibrant ICT industry within their jurisdictions.