Wednesday, September 4, 2013

How to disable IPv6 on Ubuntu Server 12.04

While deploying a couple of Ubuntu Servers, I ran into an interesting problem. The servers obtained an IPv6 address even though during setup I had specified an IPv4 address. Let's just say that after I rebooted I could get online, but the purpose of an IPv4 address was defeated since I could not see myself remembering what the IPv6 address was. Detailed below are the steps needed to disable IPv6 and set up a static IPv4 address on an Ubuntu 12.04 server. This should work on both physical and virtual servers.

First things first, we will edit the sysctl.conf file. In your terminal, issue the following command:
sudo nano /etc/sysctl.conf

This should open the file ... Add the following lines to the end of the file:
#IPv6 configuration
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Then ctrl + o to save
Then ctrl + x to exit

Now reload the sysctl settings by running:
sudo sysctl -p

This should display the lines you just added ...
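Before moving on, you can double-check that the kernel actually picked up the change by reading the value straight out of /proc (assuming your kernel exposes the standard IPv6 sysctl entries, which a stock 12.04 kernel does):

```shell
# Prints 1 once IPv6 is disabled, 0 while it is still enabled
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```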

You can verify that IPv6 is disabled by running the command
ip addr
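If everything worked, the ip addr output should contain no inet6 lines at all. Assuming grep is on the box (it is on a default 12.04 install), you can filter for them directly; no output here means IPv6 is off:

```shell
# No output means no interface holds an IPv6 address
ip addr | grep inet6
```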

Now that IPv6 is disabled, run the following command to set up a static address using IPv4:
sudo nano /etc/network/interfaces

The file should tell you that it is using an auto-configured IPv6 address for eth0
Change the line that reads
iface eth0 inet6 auto

to match the lines below
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8

Remember that the addresses used above are only examples ... replace them with the values for your network

ctrl  + o to save 
ctrl + x to exit

Now reboot the server....
sudo reboot

When you are up and running again, your server will be using a static IPv4 address.

Friday, June 10, 2011

Technology’s promise: Technology moves online: the transforming power of information technology and e-commerce

If you ask a child who was born in the year 2000 or even the 90's, they might not know what a vinyl record is, or if they do, they might not have seen it in use. Vinyl records have been replaced by CDs and DVDs, which are fast being replaced by Blu-ray discs and portable flash drives (my Samsung Blu-ray player comes equipped with a USB 2.0 port… who needs a CD??). Technology is advancing very fast, and with this advancement there are related services which are using it to their own advantage. One of the services that is making the best out of information technology and its advancement is e-commerce. Much as I would like to talk about the ways in which information technology is affecting e-commerce, let's first look at the trends that are driving information technology today according to author William E. Halal (if you want information on the other thing, just read the book - I did, you should too).

According to Halal, the technologies of the future are grouped into ten main concentrations: biometrics, wireless, Web 2.0, entertainment on demand, global access, artificial intelligence, virtual reality, quantum computing, optical computing and thought power, which is defined as the ability to control computers and transmit data simply by using brainwaves. From the above list, one can only imagine where the world will be in 20 years. At the same time, one cannot help but wonder what humans will have to do, since technology seems to be taking over, but we will get back to that at a later time.

The possibilities are endless according to Halal. We are going forward and there is no stopping for those who don't want to join the train. I am particularly fascinated by one of the key areas listed by Halal that deals with global access. According to Halal, before this IT boom, 2/3 of the people living in the developing world, or should I say poor countries (that is just wrong, sir), did not have the ability to make phone calls, with there being only 2 phones for every 100 people. Even in the United States, there used to be a time when only 60% of the population owned a PC, and even then it was not always the greatest machine, or very likely it was not connected to the Internet.

Globalization and rapid IT development are changing all that. Wi-Fi, WiMAX, HSPA, satellites, you name it they got it, are changing the way people communicate with each other. Huge companies are homing in on this power and are now moving their operations to areas where they would normally not venture due to poor telecommunications networks. I mentioned I was particularly interested in this idea of global access for one reason: for the last 2 months I have not been to work. Yes, I still work, but I am under no obligation to go there because my company has embraced the idea that "work is work as long as you are doing it". My location is irrelevant. Much as I am happy with the setup, there are a few places I would like to visit, but I am held back by the lack of Internet access in these areas. I thus cannot wait for the day when I will be able to sit in my parents' house in Mamfe, Cameroon and do my job, and no one will know I am not in the country.

Things such as global access can and will only become a reality based on the development and deployment of wireless technologies such as the ones we listed above. This brings us to the idea that none of the technologies mentioned above can exist in a vacuum. Some if not all of them are highly dependent on each other, and it is my belief that it is this dependence that leads to the rapid development of each one of them, a sort of symbiosis. The technologies showcased by Halal are leading to a shift in the ways in which businesses approach profitability, as we have already said above.

By far the greatest forecast made by Halal, in my opinion, is the idea of "teleliving". According to Halal, we should see a shift from the "dumb" computer, i.e. that box that sits on the desk and only responds when you type or click using a keyboard or mouse. The computer of the future will use AI and true voice control to interact with humans. No longer will we have to hunch over the keyboard as I am doing now; we will be able to tell the computer what to do simply by talking to it like we do to our friends, peers, and colleagues. Halal believes that this will truly mark the evolution of IT from the days when it was telephone, then television and now, or should I say then, "teleliving".

Super, isn't it? Yes it is. But it leaves me wondering: what is the place of humans in all this? Technology is automating all of these tasks that used to be performed by humans, so what is the human supposed to do? There are a couple of possibilities here, but none in my opinion is viable. For one thing, we can all become programmers, because with all of these computers doing all that we used to do, we will need someone to program and repair them if and when they break, and believe me, you don't want to be around when a computer malfunctions. The other option involves moving to the Bahamas, or better still Hawaii, and lounging on the beaches all day. If we don't have any work to do since computers are doing all the work, we might as well take the day off. Did I say the day? How about taking the whole year off, and why not the decade.

Friday, May 13, 2011

The future of computing....

Invisible computing

In almost everything we do these days, there is a computer involved. There is a computer that drives cars, one that makes coffee, and if you want one that cooks food, well, you may want to wait a little longer, though I would not be surprised if there is already one. It is this advancement in computing, particularly personal computing, that drove us to our futurist prediction: computing in the future, or what one author describes as invisible computing. In his post "Eleven events, trends and developments that will change your life" (October 16, 2006), Glen Hiemstra introduces the idea of invisible computing.

Invisible computing, or the replacement of computer hardware as we know it. In today's world the computer, or at least the personal computer as we know it, is either sitting on our desks or our laps, and more and more in the palms of our hands… but that is another topic. Technology is advancing very fast, and it is not hard to assume that in the future, and by the future we are talking about 2020, we will arrive at invisible if not near-invisible computing. What is invisible computing? This idea is founded on the premise that computing will shift from the hardware that performs computing tasks to the persons that perform the tasks. In this light, Hiemstra believes that computers will all but disappear into flexible clothing and nano paper screens.

In today's computing environment, it is more about the technology that is needed to accomplish a given task. It is about creating new and innovative UIs with the assumption that this allows humans to accomplish the tasks that they need. In the future, our focus should not be on the technology; it should be on what the user wants to do. The user's goal should, in my opinion, be the driving force behind computers. It is not hard to perceive that in the future we will have many a computer without having a computer… does that make any sense? Picture a child in the year 2030 watching a movie that was made in the 1990s, in which the wife is complaining about the husband spending too much time in front of the computer… it is my belief that they will very likely not understand what this means. This incomprehension will be due to the fact that computers will not exist as hardware that performs a task; rather, computers will be integrated into everything we do.

Did someone just whisper "farfetched"? Believe it or not, we are almost already there. My Whirlpool washer and dryer are programmable; my coffee maker is programmable. So how then is it hard to believe that in the future the computer will be there to assist us in the tasks that we need to complete, but it will not be what we are used to? The Internet will be here and very likely will serve as a super machine that allows for ubiquitous computing. So what would be the driving forces that make such a computer work?

Already, there is rapid development in the areas of light-based computing, spintronics, nanotube and quantum computing. These developments are not only increasing the speeds at which computers respond, but they are also allowing for rapid reductions in the size of computers while at the same time making them cheaper. It is not hard to imagine that in the future computers will all but disappear into the devices that we use every day. There is even talk of things such as nano paper screens and wearable computers, as already mentioned by Hiemstra above. The development of devices such as the SixthSense project at MIT only leads us to believe that there are greater things in store for computers in the future.

Another force that is sure to spur the development of invisible computing is our constant need for information. Humans by nature want information but are not particularly interested in how that information is obtained. UI designers are beginning to realize that the way humans interact with computers needs to be reevaluated, such that we do not approach HCI as what technology can do for us but rather as what the user wants to do….

Wednesday, May 11, 2011

Xmind, the collaborative tool... my review

Ideas are worthless if you cannot express them in a logical, easy-to-understand manner. One should be able to picture or visualize where the idea is coming from and where it is going, or what exactly the thinking is hoping to achieve. Now assume that you are in a group and you need to brainstorm with others, and these people are not necessarily in the same room or geographic location as you are: that is where Xmind comes in.

Xmind is an open source brainstorming and mind mapping application that allows users to plot their ideas as they think about them, allowing for visualization. Not only is it easy to use, it is also easy to export its output to PDF or Microsoft Office. Thus users can create mind-blowing imaginative logic or logical solutions to problems, which they can then share with their peers, thus cutting down on time spent on brainstorming. Xmind also allows users or groups of users to manage given projects through its use of Gantt charting, which provides users with a general overview of their projects.

Features that support innovation:
For starters, Xmind is open source and runs on a multitude of platforms, including but not limited to Windows, Mac OS X, Ubuntu Linux, Android and the iPad (it is still in development, but we can dream, can't we?). What does this mean for users? Users can develop their ideas using Xmind no matter where they are and then share these ideas with their team members in a collaborative space. This not only removes the need to be in the same physical space as the people we work with, but it also removes time as a factor of collaboration, as one can now develop one's ideas in Bangkok (don't ask me why I chose that city) and share them with one's peers in Ngelemenduka (yes, that is a real place…)

Additionally, Xmind works seamlessly with other brainstorming and mind mapping products such as FreeMind and MindManager. What this means is that users are not tied down to using Xmind; rather, they can use whatever platform they choose to logically map their ideas or brainstorm, and this will work easily with those of us who choose to use Xmind. For a project such as our natural language processor for mobile devices, this means that we can have our different players, be they linguists, psychologists, AI programmers, hardware programmers and what have you, brainstorm using different platforms with neither time, place nor equipment holding up the work.

With that being said, Xmind has the drawback that you can only get some of these features to work by paying for the Pro version, but hey, if you want quality you may want to shell out some dough….

Thursday, April 28, 2011

Voice-operated computers think tank using the Delphi Method

Voice controlled computing using AI.
In today's computing environments, there is a lack of voice-controlled computing devices that use AI to interact with the user. This is not to say that these devices do not exist, but rather that existing devices are lacking the features that would be needed to make them truly voice-only. Existing devices either must "hear" what is said or the user must use specific words or phrases to get them to do a task (with the majority of them only being able to take dictation or do searches). For example, on my Android-powered phone, I must use predefined words such as "call", "text", "find" or "search" to get the phone to do any task. Additionally, I must be very specific in what I want to get the phone to produce any kind of intelligent output. Have you ever tried to use voice dialing on a BlackBerry? "Call mum," phone responds with: "did you say call home?" Now it is possible that the phone did not understand me because I have an accent, but then again, if this were truly AI-based it should be able to understand me just as a human would (I am actually chuckling to myself since that last part is not really true, since some people choose not to understand me… I have an accent, get over it).

For true voice operation, computing devices should be able to understand or deduce commands no matter what words are used or in what order they are used. They should also be able to respond, either by querying the speaker for additional information, speaking the requested information, or telling the user that the task has been successfully or unsuccessfully completed. So for example, if I say "what is the weather forecast for Saturday?" my computer should be able to use AI to deduce that I need the weather forecast for Saturday and Saturday only. It should also be able to provide the same information if I only said "Saturday's weather forecast". Getting computers to speak naturally has already been achieved, at least on some systems (check out Alex on the Mac… I was blown away).

Much as this is all fine and dandy in theory, there are several issues that need to be resolved, or should I say dealt with, before this vision can become a reality. Yes, a lot of work has been done on this subject, and there is still a lot of work needed. What we need at this time is not so much additional research in voice-operated computing, natural language processing or artificial intelligence, but rather a paradigm shift - a change in the way that we approach natural language processing and artificial intelligence. Not only do we have to reevaluate this, but we also need to switch gears in our understanding of the role played by natural language processing in scientific theory. Additionally, there are many other areas that will also need changing (at least in the ways we think of them) because they too will be affected by the above-mentioned paradigm shift.

I am sure that some of you reading this will fail to see the relevance of, or the need for, such a system. Well, for you naysayers, imagine a blind person being able to use a computer just as you would, but without the need for a keyboard and a pointing device; all they need is a microphone and speakers. Or, in your case as a sighted person, imagine being in a car and needing some information on the fly. Would it not be fun if you could boot your computer and get the information you need just by talking to it, and not only that, you could talk to it in exactly the same way as you would talk to your peers, or should I say the same way as you would command your secretary or personal assistant (assuming you have one)? Actually, a system such as this eliminates the need for a personal assistant, don't you think?? I guess the question that arises then is: how do we create such a system, given that we are not experts in the fields of voice recognition, artificial intelligence or linguistics, or for that matter psychology (those are the different groups of people that will be involved)?

Did someone just say, "Use the Delphi Method"? That may just have been in my head, but yes, that is the methodology that should be used for studying, or should I say finding a solution to, a problem such as this one. What is the Delphi Method? Good question. The Delphi Method was developed as a means of seeking the opinions of experts on given problems without the need to have them in the same place at the same time. Cool, huh!! The Delphi Method uses a group communication structure that facilitates discussion of a specific task. This method usually involves anonymity of responses and feedback to the group as a whole or individually, while at the same time allowing participants to withdraw earlier judgment calls. The Delphi Method thus queries experts on a subject, and following this, the information is sent to all involved parties, allowing them to reconsider their previous answers based on the responses of the others.

I guess the question on everyone's minds at this point is: why would you use this method? What is so special about it that does not exist in the other methods (the Nominal Group Technique or PMI - Plus-Minus-Interesting)? The beauty of the Delphi Method is that it is very well suited for use in the discussion of questions or issues that must be tackled by a distributed group of experts - i.e. experts that are not located in the same area or field and cannot practically be brought together. Additionally, the Delphi Method has the advantage that it can, or should, be used in situations that require that a consensus be reached. By far the greatest advantage of the Delphi Method, in my opinion, is the fact that it is very effective when past data is absent. In our case, there is a lot of past data, but this data and the relevant parties that need to extract and manipulate it are by no means centrally located.

Additionally, and by no means the least of its advantages, the Delphi Method is very useful when forecasting of new technology is needed, as is the case here.
Furthermore, the Delphi Method allows participants to remain anonymous, which in turn has the advantage of reducing social pressures, personality conflicts and individual dominance issues. The Delphi Method also has the advantage of educating its respondents on all the diverse and interrelated parts of the issue or technology being investigated.

Now to the downside (yes, there is a downside to everything):
Much as the Delphi Method is a great tool, it is not always great to use, because for one thing, the result or consensus reached is the opinion of a select few, which is by no means representative of the population as a whole (I am sure you understand this one… if you don't, there is a statistics class with your name on it). The Delphi Method is also lacking because it has a tendency to create middle-of-the-road positions just so a consensus can be reached. This tendency eliminates extreme positions on the right and left of the norm. Finally, and by no means the least of its worries, the Delphi Method should not be used as the only forecasting tool in the box; using it alone will only lead to skewed forecasts. Phew!!! That was a long one… time for a breather… breather taken, let's continue...

We have thus far looked at voice-operated computing from the research standpoint, since we have been concerned with how we will achieve this goal. Let's now look at some factors that will make or break this dream.
Forces For:
Did I already mention that voice-operated computing is not a new idea, as it has existed for some time now? It is the implementation that is lacking. With that being said, technology is advancing rapidly, and it is thus not a far-fetched idea that one day we will arrive at true voice-operated computing. This idea is backed by the massive amounts of research and the research groups that exist within the fields of natural language processing, linguistics and artificial intelligence (AI).
Today's world is turning us into very lazy people (and I don't mean that in a bad way). We want to be able to do things faster and with very little effort. Typing will soon be a thing of the past (actually, it is already a thing of the past if you have the money to shell out for one of those dictation systems), but this goes far beyond that. This system, in my opinion, will lead to a new way of doing things, a new way of human-computer interaction.

Forces Against:
I guess the greatest force that would work against this kind of system is the socio-technological constraints involved in building it. What do I mean by this? The saying "too many cooks spoil the soup" comes to mind. We have already ascertained that for a system such as this to work, it would take more than just hardware and software programmers, but also linguists, psychologists and what have you. With this multiplicity of players, each with their own biases and ways and means of doing things, one can only marvel at the chaos that may ensue. In that same light, it is my belief that the reason we have not arrived at such a system yet is that there are way too many players in the game. Much as having many players may be a good thing, it also means that there is a duplication or triplication of efforts.

Money, the root of all evil, rears its ugly head again. It is always fine and dandy to have people working on stuff, but ideas such as these need backing, both financial and otherwise. The big boys such as IBM and Microsoft have funding programs for these kinds of ideas, but then again, it also means that they have control over how the projects are developed and implemented. Their open source counterparts, on the other hand, must contend with the time available from volunteers, which is very often not forthcoming. For an idea such as this one to fully work, there should be financial backing with no strings attached, allowing for full creativity of all parties concerned…. yeah, that is going to happen.