Archive for category Architecture & Design
Information and communication technologies (ICT) are increasingly seen as an integral component of development, and global organizations like the World Bank are doing more to support and seed the development of technologies that empower the marginalized. However, post-conflict reconstruction remains a particularly challenging phase for any community, society or country coming out of a war – more so when the war has persisted for decades.
In most cases, the main use of ICT at the end of a conflict is in humanitarian assistance, though in that phase it is more likely to be used by international players in the humanitarian field. As time passes, the emphasis shifts from humanitarian assistance to longer-term development support, which usually means the government in question becomes increasingly capable of delivering services to its people.
Countries that have emerged from conflict present unique challenges that are staggering in both their scale and immediacy. While it is often easy to see the need for service delivery, it is harder to figure out the support structures and mechanisms needed to enable transparent and accountable provision of those services. ICT, like anything else, requires a workable support system that can sustain and further develop the foundational pieces that have been put in place. Give someone a computer and sooner or later they are going to need someone who knows how to fix that computer – preferably without incurring the costs of a consultant.
Questions of reliable power supply become imperative in post-conflict areas, and as such, more often than not, ICT cannot be implemented at the scale and sophistication that would be possible in a conflict-free scenario.
From a systems design and implementation perspective, the challenge of ICT in a post-conflict situation is slightly different from the challenges faced by ICT in development generally: while infrastructural challenges are largely overcome over time, questions of policy, of human resources, and of the development of both will still affect the smooth functioning of ICT. Systems designed to cope with post-conflict reconstruction and eventual development need to take into account the unique challenges brought about by the end of a war.
If you are a keen consumer of tech news and rumours, then you are probably well into buzzword fatigue with regard to application markets, stores or places. Every other day, it seems, someone is announcing a way for end users to access applications from a centralized location while also acting as a transaction broker for developers who want to sell their applications. This is the model that Apple popularized and successfully leveraged into the sale of millions of iPhone and iPod units. Now every big player in the industry wants to create an App Store or some variant of the same model.
Interestingly, this is not a new thing at all – at least on the personal computer front, Linux and its many distributions have always come with a package manager that pulls down, as it were, whatever you want from a central repository; you can even add additional repositories as your needs require. Open source being what it is, the payment side was never pushed, which alas was perhaps one of those things that could have moved the model faster and farther. However, open source has a number of backers who may well benefit from this app store craze. Having operated such a model since their inception, it is conceivable that Linux distributions like Ubuntu have the requisite experience and expertise to run an app store as a way to earn revenue. More recent versions of Ubuntu have interestingly focused on making this particular variant of Linux more cloud friendly, which would fit the app store model, since the cloud would form the foundational infrastructure on which to add more developer- and consumer-facing capabilities.
Within the last 24 hours, rumours of Windows 8 surfaced, and one of the features rumoured to be in the works is a Windows app store. Motivations aside, the more interesting question to ponder with regard to that rumour (if it ever sees the light of day) is: how would such a thing work in the Windows ecosystem? Windows is a versatile platform, but one of the things people are used to doing is hunting down software binaries to install: you either download them or buy them on a CD, do the needful installation, and voila – you have your application. What kind of confusion would an app store cause amongst more casual users of Windows? Windows is popular in the corporate world, where IT departments exercise total control over the behaviour of the operating system, so the question that begs an answer is: how does corporate IT deal with app stores? Where exactly do their policies fit in such a mix?
I must admit that Microsoft is not without experience in managing large-scale deployments of software, though that experience may not translate directly. As the overseer of the most widely used operating system on the planet, Microsoft distributes updates to millions of desktops and servers around the world – an administrative and organizational capacity that would lend itself easily to an app store model. Contrasted with what Ubuntu can brag about, the only problem with Microsoft’s know-how is that much of it is probably kept within Microsoft’s walls and/or requires you to be more than just a casual user of the operating system. Distribution and deployment of Windows updates may not (at least initially) scale well to include third-party applications that have nothing to do with the core operating system.
The app store model does present a great opportunity for small and/or first-time developers, as the ability to reach a great number of users with a useful application has become that much easier. While there are business advantages to app stores, they also raise the question of how to keep your wares up to date across a myriad of stores, many targeting different varieties of consumers, while maintaining feature and/or performance parity across all of them.
Most regulators work on the premise that the more competition there is in the marketplace, the better the choice and quality available to clients. That notion has been going through my mind for the last couple of days, because having too much choice can be debilitating. More choice ultimately does lead manufacturers and providers to try to differentiate their products and services from their competitors’, so in the end users benefit. However, the struggle to improve a product is shared across all competitors in an industry, so it is reasonable to expect that all products targeting the same space will sooner or later reach the status of being good enough for the customer.
It is at this point that quality parity causes choice to become more of a problem for customers than a blessing. Such a scenario is currently playing out with smartphones – the number of operating systems is simply staggering. The leading platforms – iPhone and Android – are adding significant features with every release. It is too early to proclaim smartphone operating systems mature, so the game goes beyond copying what a competitor has implemented, to implementing features that make the operating system better at serving end users’ needs. On a feature-for-feature basis, there isn’t much difference between an iPhone and an Android device.
Sure, the iPhone has a more comprehensive support structure around it, with iTunes and the content therein, but then again such an infrastructure is being set up around Android, and almost every player in the mobile phone industry wants an app store.
The Android ecosystem is interesting, as it is virtually an attempt to replicate the Windows model on mobile devices – a single platform installed on hardware manufactured by different companies. If this analogy is taken further, you will note that on a purely functional basis there is not much difference between Dell, Toshiba, HP or Acer laptops and desktops. So far the analogy has not played out that way, since Google’s Nexus One didn’t do well in the market, while the Motorola Droid is by far the most successful Android device so far with regard to the number of units sold.
Another side effect of competition is increased feature creep, which ultimately makes a product remarkably more complicated by diluting its raison d’être. A good example is Microsoft Office. Think of Microsoft Office 2003, which had the old-school menu system. In an application like Word, there is an incredible wealth of features and capabilities hidden in those menus, and few people can reach them easily. In my experience, I have come across Word documents with a table of contents that was constructed manually, even though Word has long had the ability to automatically generate a table of contents and keep it updated. Because of feature creep, more of the software’s capabilities are hidden from the user, making their discovery and continued use more complex even for easy tasks. Relatedly, the fact that reaching a feature requires navigating a hierarchy of menus means the full power of the application is not made easily accessible for daily and repeated use.
Microsoft Office’s new ribbon interface was meant to bring more of the suite’s capabilities to users’ attention, so as to enable repeated and continued use of what the product offers. Has it worked? I am not convinced, but in some ways it is an improvement over the old user interface.
In closing, the competition in smartphone operating systems needs to be kept sane and focused on delivering real and effective benefits to users, instead of just piling on features which may eventually get in the way of a more enjoyable and productive use of the platform. I must admit that so far Apple seems to be working on this as a deliberate objective in the evolution of its operating system. On the other hand, the fundamental model that the Android operating system follows makes it a tough sell to keep that level of focus and, in the process, ensure a coherent platform experience for users, developers and network operators alike.
The expected shift of computer processing to ever greater degrees of parallelism has sparked interest in new ways of developing software that take full advantage of the horizontal increase in processing power. The key area that has received the bulk of the attention is programming languages and tools. In a many-core world (as opposed to what is now called multi-core), shared mutable state becomes very tricky, so most of the mainstream programming languages would be difficult to use to produce such software. While almost all the mainstream imperative languages have a library to enable the development of parallel code, most of these facilities are not baked into the language, and sometimes the initial design of the language itself gets in the way. In the design of most mainstream imperative programming languages, immutable data types are rare or non-existent altogether.
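To make the shared-state point concrete, here is a minimal sketch in Java (assuming a modern version with `Stream.toList`, i.e. Java 16 or later; the class and method names are invented for this illustration). Because the list is immutable, many cores can read it concurrently, and the parallel reduction needs no locks in user code:

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelSum {
    // Sum the integers 1..n with a parallel reduction over immutable data.
    // No thread can modify the list, so no synchronization is required.
    static long parallelSum(int n) {
        List<Integer> values = IntStream.rangeClosed(1, n).boxed().toList();
        return values.parallelStream()
                     .mapToLong(Integer::longValue)
                     .sum();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1000)); // prints 500500
    }
}
```

Had the reduction instead incremented a shared mutable counter from multiple threads, it would have needed locks or atomics – which is precisely the difficulty the paragraph above describes.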
Increased interest in functional programming has given rise to new languages that serve as a bridge between the existing imperative programming mindset and the much-needed shift to a world of parallelism. Functional programming is certainly not new, as many of its techniques have long been implemented in languages like Scheme, Haskell and Erlang, amongst others. However, these languages and the ideas they implement largely remained in academic circles until recently, when the software industry took a more proactive role in transferring the knowledge of academia to industry. Programming languages like F# and Scala borrow heavily from these pioneers of functional programming.
The newest in this growing list of new programming languages is Google’s Noop. The following is a description of Noop from the project’s web site:
… new language experiment that attempts to blend the best lessons of languages old and new, while syntactically encouraging what we believe to be good coding practices and discouraging the worst offenses. Noop is initially targeted to run on the Java Virtual Machine.
The basic assumptions in the design and development of Noop are certainly interesting. Integrating testing into the programming language can greatly improve code quality, and making the language truly object oriented should improve its readability. I have found functional programming languages to have a pleasantly concise syntax that effortlessly achieves what would require a ton of boilerplate code in supposedly object-oriented languages like Java or C#, which still include primitive, non-object data types.
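The boilerplate gap can be seen even within Java itself. In a sketch assuming a recent Java with records (Java 16+; `Point` is a hypothetical example, not anything from Noop), one line of record declaration replaces the hand-written constructor, getters, `equals`, `hashCode` and `toString` a classic class would need:

```java
public class Records {
    // One line of data shape; the compiler generates the constructor,
    // accessors, equals, hashCode and toString that a classic class
    // would spell out over dozens of lines.
    record Point(int x, int y) {
        // Behaviour can still be added alongside the declarative data.
        Point translate(int dx, int dy) {
            return new Point(x + dx, y + dy);
        }
    }

    public static void main(String[] args) {
        Point p = new Point(1, 2).translate(3, 4);
        System.out.println(p); // prints Point[x=4, y=6]
    }
}
```

This is roughly the conciseness that languages like F# and Scala offered from the start, arriving in mainstream Java only much later.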
In the past I enjoyed the concept and practice of programming because it provided an opportunity to explore a way of thinking about a problem without the usual constraints one faces in the real world. The greater challenge (and hence satisfaction) is in defining a model that will account for any potential failures and still accomplish its intended purpose. As time passed, I came to focus specifically on design and the resulting architecture. Designing anything is a process of creating a model that accounts for solutions to aspects of the specified problem. That is reductive in and of itself, but there are much more insightful aspects of problem solving that need to be taken into account in designing and developing a solution.
In any design effort, the ability to abstract from the problem remains imperative. While the generally accepted adage that too much of [take-your-pick] is a poison applies, abstraction done right can provide a practical solution to a multitude of problems. Programming paradigms have always been about creating models that either provide a way for us to give instructions to computers or a way for us to describe the world in a manner that a computer can comprehend and hence process. Programming languages remain a way for humans (programmers, software engineers, etc.) to interact with a computer – giving it instructions on what to do and how to handle the particulars of our reality. The models implicitly encoded into programming languages represent our thinking, whether in the form of a machine-like view of the world or an attempt to bring the machine closer to the way we appreciate the world.
What are generally referred to as low-level programming languages were essentially intended to enable us to communicate with computers, and as such they bear a close relationship to the way in which computers operate. Think of assembly language and how you program in it.
With time, additional abstractions were added that allow us to focus more on giving computers instructions, as opposed to prescribing the manner in which the computer carries out those instructions. This focus on instructions gave rise to what are generally referred to as procedural programming languages, in which the emphasis is on the results of the operations to be accomplished. The ability to focus on what you want done, and how it is achieved in steps, naturally led to greater interest in using computers to carry out essentially repetitive tasks that could be encoded in a number of functions, which can then be executed to produce the desired result (or report errors, if any).
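As a sketch of that procedural style (the `sum` and `average` names are illustrative, written here in Java for consistency with the other examples), a task is decomposed into named steps that are executed in order to produce a result:

```java
public class Procedural {
    // Procedural style: explicit, step-by-step accumulation in a
    // named procedure that does one part of the overall task.
    static int sum(int[] data) {
        int total = 0;
        for (int v : data) {
            total += v;
        }
        return total;
    }

    // A second procedure composes the first to finish the task.
    static double average(int[] data) {
        return (double) sum(data) / data.length;
    }

    public static void main(String[] args) {
        System.out.println(average(new int[] {2, 4, 6, 8})); // prints 5.0
    }
}
```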
This focus on the procedures needed to accomplish a task leads to huge codebases that are hard to maintain and/or evolve to meet new or changing circumstances. This problem would seem to stem from the fact that the procedural way of software development does not adequately account for how the real world operates. In the real world, things exist and operate as a single unit – there is no separation between what something is and what it does.
Personally, I get the impression that this is when programming became a bit more philosophical, in the sense that there is a deliberate effort to model the world in terms of its nature and its essence. The nature of the world describes what the world is: in OOP, this is simply described as the state of an object, typically denoted by properties/attributes/fields, depending on the terminology of your platform of choice. You may notice that the nature of objects so defined does not need to change in order to make things happen, because OOP relies on message passing to get objects with the appropriate nature to carry out the intention of their essence (what you do is defined by your nature, and your nature defines what you do).
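A minimal sketch of this state-plus-messages view (the `Account` class is a hypothetical example): the caller sends a message stating an intention, and the object's state – its nature – decides how it responds, without the caller ever touching that state directly:

```java
public class Messages {
    static class Account {
        // The object's nature: its state, hidden from callers.
        private final long balance;

        Account(long balance) {
            this.balance = balance;
        }

        // "Sending a message": the caller asks for a withdrawal;
        // the object's state determines whether it succeeds.
        Account withdraw(long amount) {
            if (amount > balance) {
                throw new IllegalStateException("insufficient funds");
            }
            return new Account(balance - amount);
        }

        long balance() {
            return balance;
        }
    }

    public static void main(String[] args) {
        Account a = new Account(100).withdraw(30);
        System.out.println(a.balance()); // prints 70
    }
}
```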
While OOP allows for a better abstraction of the real world, the manner in which it has been implemented thus far has a serious shortcoming. All the OOP languages I have come across are rather verbose, as the design process needs to describe every applicable element of the problem space in code. With increasingly large programs, it becomes much more challenging to maintain the code or ensure it is tested to the satisfaction of end users. Hence testing frameworks have mushroomed around OOP languages – Java with JUnit, among so many others.
For all intents and purposes, OOP still bears some lingering association with how a machine goes about processing instructions. The so-called fourth-generation languages (4GLs), like the Structured Query Language (SQL), have shown us how to express our intention to the machine and have the machine figure out the means of fulfilling that intention, or at the very least getting as close to it as possible. The oft-referenced Moore’s law continues its march towards ever more powerful machines, albeit in a slightly different way. With powerful processors driving our computers, we do not have to be chained to the vagaries of machine-type thinking.
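Java's stream pipelines offer a rough analogue of this declarative, SQL-like style inside a mainstream OOP language (a sketch assuming Java 16+ for `Stream.toList`; the `longNames` helper is invented for illustration). The pipeline states *what* is wanted and leaves the *how* – iteration, intermediate storage – to the runtime:

```java
import java.util.List;

public class Declarative {
    // Declarative intent, much like a SQL SELECT ... WHERE ... ORDER BY:
    // names longer than three characters, upper-cased, sorted.
    static List<String> longNames(List<String> names) {
        return names.stream()
                .filter(n -> n.length() > 3)
                .map(String::toUpperCase)
                .sorted()
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(longNames(List.of("Ada", "Grace", "Alan", "Edsger")));
        // prints [ALAN, EDSGER, GRACE]
    }
}
```

The equivalent hand-rolled loop would interleave the intention with bookkeeping (index variables, temporary lists), which is exactly the machine-type thinking the paragraph above argues we can now leave behind.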
Another more poignant point to consider is the increased use of computers for entertainment (gaming etc.), business and socializing. The problems facing social networking applications are markedly different from those that faced businesses at the advent of the current mainstream programming languages. A business environment invariably has some kind of structure around it, encoded in policies, procedures, organizational structure and the processes the organization runs. Starting from such a foundation, it is possible to formulate a few procedures which can be executed on a regular or ad hoc basis to great effect. However, consider the way social networking sites are used – a single person may have a Facebook account, a Twitter account and a YouTube account, in addition to web mail accounts. These applications have become people centric, and the number of people involved can quickly become a challenge for social networking sites that have managed to garner a big enough following.
The social networking craze reveals an interesting dimension of how programming languages have evolved over time. At the outset, a few academics used computers to help with research; then the business world caught on, and now we face the reality that perhaps programming languages need to be less rigid. In IT discussions, “less rigid” may easily be read as “less secure”, though in this context less rigid but more robust would be the best outcome in the evolution of programming languages. Objects are a good way to model the world, but they lack a certain expressiveness in effectively illustrating and modeling the state of the world as seen by a person who cares more about getting things done and less about the steps taken to get there.
The beta of the next release of Windows has been making the rounds and has garnered mostly positive reviews, with most people having good things to say about performance. Windows 7 essentially addresses the shortcomings of Windows Vista, and top of the list of Vista’s transgressions is the User Account Control (UAC) feature, which was intended to make Windows more secure but proved too zealous in its prompts for permission. Changes in Windows 7 aim to reduce the number of UAC prompts, but so far this may have led to a less secure configuration in the next release of Windows.
According to two Windows enthusiasts, the current configuration of UAC on the beta version of Windows 7 makes the next release of Windows vulnerable. The first flaw allows malware to turn off UAC: a nasty piece of code could take over your Windows 7 box without any protest from your system. The second flaw allows malware to elevate its permissions on the system. The details of the second exploit can be found here. It basically takes advantage of the fact that processes shipping with Windows 7 are allowed to automatically elevate their permissions without any UAC prompt. It is thus possible to use a binary that ships with Windows 7 to launch a third-party program, which could be malware, thereby allowing malware a free pass into your system.
The incredible and perhaps scary bit of this drama is Microsoft’s response to these flaws: so far, the response is that the two issues are not flaws but are there by design. In what world does it make sense to insist that an apparent security vulnerability is there by design, unless the intention was to have a vulnerable design from the outset? I don’t buy the “by design” argument, as it seems to rest on the assumption that there is absolutely no way malware can find its way onto a Windows 7 system in the first place, thus making it all right to make flawed design choices.
The reaction to and interest in Windows 7 has been phenomenal, to say the least, and personally I am impressed that Microsoft is getting one of the benefits that comes with open source software development: community support and involvement. These two security issues were raised by Windows enthusiasts, and using a beta release at that; the upside is that this gives Microsoft the chance to fix the vulnerabilities before releasing Windows 7. More importantly, fixing them would help cement the relationship between Windows hackers from the broader end-user community and Microsoft, such that cooperating towards securing Windows becomes an imperative for everyone in the Windows ecosystem.
UAC in Windows Vista was annoying, but I have always thought I would much rather get used to the annoyance of UAC than suffer a malware infestation, which would dramatically increase the amount of time I spend babysitting Windows. The UAC changes in Windows 7 are meant to lessen the annoyance of Vista, but there is a real threat of these changes making Windows 7 less secure. It is a delicate balance between security and usability, and missing that balance can shape end users’ reaction to a product. How Microsoft deals with this so-called by-design flaw may well shape people’s attitude towards Windows 7. For Microsoft, the more important question is how many people are willing to hang on to Windows XP because of the perceived vulnerabilities in Windows 7.
The browser is king in an increasingly web-centric world – and I don’t mean web-centric merely in the sense of communication (email, chat and so on, which are certainly important), but in that people increasingly rely on web applications to carry out line-of-business (LOB) tasks and activities. With that in mind, the browser has taken a central role in computing generally. Microsoft owns the browser market and has done so since it trounced Netscape in the now infamous browser wars. However, savouring the sweet taste of victory, and perhaps with some arrogance and complacency about its market, Microsoft decided that the browser was no longer worth its attention (just look at IE 6). As you may be aware, Firefox changed Microsoft’s view of the browser and spurred it to put more resources into the development of IE, which brought tabbed browsing with the release of IE 7. Microsoft’s work is not done yet, since IE has never really supported standards beyond what Microsoft thought was best. Don’t get me wrong – initiatives by individual players in a software category can lead to increased innovation in that category, but that is a post for another time.
The follow-up to IE 7 has just been released under the name Windows Internet Explorer 8 Beta 1. This beta release is targeted at web developers and designers and includes a ‘super standards’ mode that the browser uses by default. Not that there are super standards; the simple meaning of super standards mode is that IE 8 will adhere to web standards more closely when displaying web content. Making super standards mode the default rendering behaviour means some additional work for developers and designers who want their wares rendered any other way: the extra meta tag that must be added to web pages to trigger non-super-standards rendering on IE 8 is a testament to the continued special treatment lavished upon IE when developing and designing web sites. It is not all lost, though, since I believe that making super standards mode the default will, in the long run, push web designers and developers to produce more standards-compliant web sites.
I just installed IE 8 Beta 1 and I am kind of impressed by the installation experience. The download was not that big, but then again this is still an early beta – who knows what will happen by the time IE 8 is ready for release. The installation went without a glitch. I should mention that I decided to install the beta because of IE 7 emulation, which ensures that I get IE 7 rendering when I need it (though it requires a browser restart at the time of this writing).
I chose not to accept the default settings when personalizing IE 8 Beta 1, which meant I saw all the options there are with regard to search engines, web providers and so on (I was not aiming to produce a professional account of my installation experience 🙂 ). Of course IE 8 offered to make itself the default browser on my machine, but that honour currently belongs to Firefox (though for some reason I can’t get Google Desktop Search to use Firefox for display even though it is the default browser – but I digress). IE 8 also offered to import my settings from my “other” browser, which in this case is Firefox. The interesting part of the customization was when IE 8 detected the Firefox extensions I have installed and offered to find similar extensions.
The search for IE extensions took me to Windows Marketplace (windowsmarketplace.com), which looks rather wrong with IE 8’s super standards mode running. Notice how the web site’s navigation bar is out of place?