Friday, February 22, 2008

The race for dvcs

Hi all,
It seems every project worth its salt is racing towards having some sort of DVCS (distributed version control system). To understand the difference between where they are now & where they want to go, one has to know where they stand at present. Most of the old-school SCM (source code management) tools came from a time when contributors lived nearby & the contributors interested in a project were not many. Issues of control were also of paramount importance.
Today, however, the situation has changed: more & more people are contributing to FLOSS projects, & many of them might not have the best of bandwidth available most of the time. This is where DVCS tools come in: one can check out the whole tree & do one's playing around offline for extended periods. Whenever contributors connect, they can send their changes to everyone who is subscribed to their branch as well as reconcile their work with everyone else's. So 2-3 people who have an idea & want to run with it can do so in this new era, in a way they couldn't in the old one. Also, all of these tools are pretty fast.
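The offline round trip described above can be sketched in a few commands. A minimal sketch using git (bzr & mercurial work the same way); the project name & file here are made up for illustration, & every step runs without a network connection:

```shell
# Minimal offline DVCS round trip (git shown; bzr & hg are analogous).
# Nothing here touches the network: the full history lives in the tree.
mkdir -p /tmp/dvcs-demo && cd /tmp/dvcs-demo
git init -q project && cd project
git config user.email "you@example.com"
git config user.name "Offline Contributor"
echo "first draft" > notes.txt
git add notes.txt
git commit -q -m "work done offline"
git log --oneline | wc -l        # the commit is recorded locally
# later, when connectivity returns: git push <remote> <branch>
```

The push at the end is the only step that needs connectivity, which is the whole point of the model.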

OK, now for the tools. There are a whole bunch of them: bzr, mercurial, git, svk & monotone are the well-known ones, & perhaps there are a few more which I'm not aware of.

As far as projects are concerned, they are also moving, but the one I'll be gladdest to see move to one of these tools is Mozilla.

Just a small list of projects either using or planning to use these DVCS tools (the list is bound to grow) :-

1. OpenJDK
2. Openoffice.org
3. Mozilla Firefox
4. Ubuntu
5. Linux Kernel
6. Fedora

and so on & so forth. The list is by far incomplete & there are many more projects which are moving to one or the other DVCS tool. It's the workflow which will tell them which tool to use. The whole point: people are moving to a better way of doing things, so be there or be square ;)

Monday, February 18, 2008

Gecko vs Webkit

Hi all,
I had a crash happening with almost any & all mozilla-based browsers. On advice I installed konqueror & used that, as it would give an indication of what might have gone wrong. Lo & behold, it worked. On further investigation the problem was narrowed down to the underlying Gecko engine, which was at fault. So the next thing I had to know was why konqueror worked while all the mozilla-based browsers didn't. I found out it uses another engine called WebKit, which is different from Gecko (the engine used in mozilla & its derivatives). I then also decided to check out the various things which were/are being said, some now, some said some time earlier. One of the links which struck me was this one. Nothing was being said that had not been said before, but still it impressed me. So I'm one happy customer :)

DTH Operators defying Interoperability clause

Hi all,
First take a look at this news item. For reference I'm putting the same here :-

Mumbai: DTH providers Dish TV and Tata Sky have been playing around with the TRAI’s (Telecom Regulatory Authority of India) interoperability guidelines, which require that the DTH operator must provide a set top box technically interoperable among different service providers.

The set-top box should also conform to standards laid down by the Government. At present, these standards prescribed by the Government incorporate, among others, MPEG 2 compression format.

Now there is resistance to a revision of standards suggested by Trai as well.

R.N. Choubey, advisor (B&CS), Trai said, “We have recently recommended to the Government that the standards should be revised to permit mpeg4 set top boxes. These would be compatible for mpeg2 transmissions anyways.”

As of now, technical interoperability hasn’t taken roots as hardly any consumer has reportedly decided to switch service providers. A reason, according to Trai, is that the prices of CAM (conditional access modules), which are needed to be plugged in the existing set top boxes slot are too expensive (almost as much as the set top box). The CAM is not provided by the operator.

Trai suggests the revision of standards be implemented prospectively and apply to DTH subscribers enrolled after six months from the date of such revision. Such revision should not compulsorily require the DTH operators to upgrade the STBs of existing subscribers to conform to revised standards, though they would be free to do so on their own.

The suggestion comes in the wake of Reliance’s DTH offering - Big TV to be available on a technologically higher platform-mpeg4.

Vikram Mehra, chief marketing officer of Tata Sky said, “Technically, all Tata Sky set top boxes are interoperable and do follow the existing TRAI guidelines.”

Vikram Kaushik, the CEO of Tata Sky, however, finds the technical interoperability clause “ill-advised” & said that it should be removed altogether.

Source: Hindustan Times

This is something which needs to be resisted at all costs. We, as consumers, want the ability to change services at a whim. It's the same thing as in mobile phones: if there were no interoperability there, one couldn't just take any SIM & put it in. The technology is there, & I'm sure even the CAM prices would come down sooner or later (i.e. if enough people come to know about it & how it can be done).


ming & libming - delicious confusion, part 2

Hi all,
Look at the ming page & the libming page. Can somebody make out whether the two are the same or different? Another case of confusion. There is bug 182491, which talks of doing the same. However, what would be great, if it could be done, is to merge the stuff from libming into ming, bug reports & all. Then there would be just a single package for people to see (great for new people). For people who know only libming, it could redirect to ming. Also, the version history should/could say something.
This is something that needs to be done at Launchpad rather than anywhere else. I'm sure somebody has talked of this & filed a bug. It just confuses people & makes them unhappy.

Tuesday, February 12, 2008

GNUNIFY '08

Hi all,
So after a couple of days, when things have settled down some, let's see how GNUNIFY '08 compares to previous years. I remember the previous years, when we had just one floor for GNUNIFY (this time there were three) & we didn't know how to fill one track (this time there were 3-4 parallel tracks). We were all volunteers who did whatever we could. Now it's nice to see the SICSR guys doing their own stuff. Of course you still come across ignorance, but that's the idea of events like these: that the ignorance gets dispelled.

Day 1 :- Like everyone else, it was hard for me to decide where & what I should be going for. In the end, on the 1st day, I ended up attending the whole mobile track as well as some info about the roadmap of OpenJDK & where Sun wants to take its stuff. The most entertaining was a guy called Alok Barsode, who tickled us with his humour, charm & something called GNU/Linux on Bluetooth. The more educational one was by Kiran Divekar, & I had a nice talk with him as well.

Day 2 :- Again the problem of plenty. This time I attended Anant Narayan, who talked quite a bit about Mozilla Prism, which is nothing to write home about at this point in time, but which he thinks has some future, although he wasn't able to generate much interest in it. One thing which did come through is that persistent connectivity with web applications is the present as well as the future. I was also delighted to meet Sayamindu, who has worked to make software happen on the OLPC. We had some interesting discussions about the problems they have been facing, the roadmap they hope to take & the challenges in front of them. I also went to see what a good friend of mine, Rohit Srivastava, was doing. He gave a demo of penetration testing, although the CD still needs a lot of work. There was also a LinuxChix presentation, which I didn't attend; I was totally boggled by the day's proceedings. All in all, a heady mix of things I love :)

The pain of being on unstable

For the last week or more, most of my applications have been crashing for apparently no reason. You name it, it was crashing: firefox, abiword, leafpad, oowriter. I had been investigating the issue for a few days, & there had been something which kept coming up on the CLI which had me rankled, the same issue every time, & I didn't know what it might be. Something like this (for e.g.) :-

(leafpad:2793): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
Segmentation fault (core dumped)

Finally, while surfing, I came to know about the libgio library, or at least the transition to it: the part where nautilus has transitioned to it while the rest of the desktop hasn't. So it might be one of those things. Of course, one would have to report the assertion application-wise so the developers know what needs to be fixed. If one wants to know about libgio, please go here
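For what it's worth, GLib itself provides a way to hunt these criticals down: its documented G_DEBUG environment variable can turn the first CRITICAL into an abort, so gdb stops right where the bad object was passed in. A sketch, using leafpad from the example above (any of the crashing apps would do):

```shell
# Make GLib abort on the first CRITICAL warning & catch it in gdb, so
# the backtrace shows who handed the bad object to g_object_unref.
G_DEBUG=fatal-criticals gdb -ex run --args leafpad
# once gdb stops:
#   (gdb) bt        # backtrace to paste into the bug report
```

That backtrace is exactly the "application-wise" detail worth attaching when reporting the assertion.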

Saturday, February 9, 2008

Heapy Python memory debugger

Hi all,
I had been searching for a tool which can be used to debug the memory of programs I run. I am no programmer, but in my free time I like to help out the projects/tools which I use & like. So after a bit of searching I came across Heapy. The tool & the documentation look cool. I will have to play with it for a while to see how it works out. Digging a little deeper, I realized that there have been no svn commits for over a yr now :( There is also another one named PySizer, but it is not going anywhere either. So the only one which can be used for memory debugging is the well-known valgrind. There need to be many more tools which can do this; I was looking for something Python-specific.
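Till one of these tools picks up steam again, even the stdlib can give a crude, Python-specific heap summary. This is my own sketch (not Heapy's API): it walks the garbage collector's live objects & counts them by type:

```shell
# Crude Python heap summary using only the stdlib (a stand-in sketch,
# not Heapy): count the GC's live objects by type, biggest first.
python -c "
import gc
from collections import defaultdict

counts = defaultdict(int)
for obj in gc.get_objects():
    counts[type(obj).__name__] += 1

for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print('%8d  %s' % (n, name))
"
```

Heapy itself goes much further (sizes, reference paths); this only gives counts, which is still enough to spot a type that keeps growing.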

bzr and bazaar - What delicious confusion

There are two very similarly named packages, bazaar & bzr. Both are distributed version control systems (DVCS). While work on bazaar has stopped upstream, bzr is only just beginning. It's only through the descriptions that one comes to know something is amiss.

shirish@Mugglewille:~$ aptitude show bazaar
Package: bazaar
New: yes
State: not installed
Version: 1.4.2-5.3
Priority: optional
Section: universe/devel
Maintainer: Ubuntu MOTU Developers
Uncompressed Size: 1405k
Depends: libc6 (>= 2.5-0ubuntu1), libgpgme11 (>= 1.0.1), libneon26-gnutls (>= 0.26.2), diff (>= 2.8.1), patch (>=
2.5.9), gawk
Suggests: openssh-client, bazaar-doc, gnupg
Description: arch-based distributed revision control system
GNU Arch is a revision control system with features that are ideal for projects characterised by widely
distributed development, concurrent support of multiple releases, and substantial amounts of development on
branches. It can be a replacement for CVS and corrects many mis-features of that system.

bazaar is an implementation of Arch in C, based on tla. It focuses on making tla's UI more accessible, but also
has smarter merging and gettext support.

Unless you have a pressing reason to use bazaar you should use some other revision control system as upstream
development has ceased.

Homepage: http://bazaar.canonical.com/


shirish@Mugglewille:~$ aptitude show bzr
Package: bzr
State: installed
Automatically installed: no
Version: 1.0-1
Priority: optional
Section: devel
Maintainer: Ubuntu Core Developers
Uncompressed Size: 14.2M
Depends: libc6 (>= 2.7-1), python (>= 2.4), python (>= 2.5) | python-celementtree, python-central
(>= 0.5.8)
Recommends: bzrtools, python-paramiko
Suggests: bzr-gtk, bzr-svn, python-pycurl, xdg-utils
Description: easy to use distributed version control system
Bazaar is a distributed version control system designed to be easy to use and intuitive, able to adapt to many
workflows, reliable, and easily extendable.

Publishing of branches can be done over plain HTTP, that is, no special software is needed on the server to host
Bazaar branches. Branches can be pushed to the server via sftp (which most SSH installations come with), FTP, or
over a custom and faster protocol if bzr is installed in the remote end.

Merging in Bazaar is easy, as the implementation is able to avoid many spurious conflicts, deals well with
repeated merges between branches, and is able to handle modifications to renamed files correctly.

Bazaar is written in Python, and has a flexible plugin interface which can be used to extend its functionality.
Many plugins exist, providing useful commands (bzrtools), graphical interfaces (bzr-gtk), or native interaction
with Subversion branches (bzr-svn).

Install python-paramiko if you are going to push branches to remote hosts with sftp, and python-pycurl if you'd
like for SSL certificates always to be verified.
Homepage: http://bazaar-vcs.org


Wanna see something really interesting? Both http://bazaar.canonical.com/ & http://bazaar-vcs.org resolve to the same site, http://bazaar-vcs.org/. Canonical has adopted the bazaar-vcs project, & I do like it. Somebody please, please tell me why the old bazaar package is still there (esp. in Hardy)?

Wednesday, February 6, 2008

The perceived duplication story

Hi all,
Lots of times people ask why there is duplication of effort in the free & open source world, & I have grown tired of answering people one by one. So here are the facts :-

1. Duplication comes up & exists in proprietary structures too, & that structure is the prevalent one. There it is much more common, due to people wanting to make their own IP.
2. Duplication comes in free software due to some common & some uncommon reasons :-
a. The developer/s of some software are not heeding the needs of a community. Differences of opinion.
b. The needs of some group/s become different from those of other groups, or they want to take the project in some direction in which the developers do not want to go. So creativity does not get wasted. This does not always just fork the project; sometimes the forks also come back & merge together. One of the more famous examples of this is the Compiz Fusion marriage which happened some time back.

Doesn't it all sound like our DNA, which has been forked so many times that we came to be, & we are still forking as well as merging as we feel the need?

This forking has some great benefits as well :-

1. It keeps people on their feet all the time. Competition.
2. When big changes happen (like an API change), one or the other project makes sure to jump onto them. This way, the features needed to make that transition happen smoothly, & to take people along, get built. This happens all the time: KDE has jumped from KDE 3.5 to KDE 4.0-4.1 now, & we are in the midst of a great transition. Or whenever GNOME starts to go from the 2.24.x series, which it is in now, to the 3.0 series, things will be in flux for some time to come. So people have to take a leap of faith.

Well, all said & done, I'm all for this perceived notion of duplication. May it thrive; it just tells me there is one more way of doing something if something isn't working. There is a second chance ;)

Monday, February 4, 2008

Bandwidth Scarcity & Last-Mile Community Networks

Hi all,
In the last week there was a bandwidth outage due to undersea cable cuts. This resulted in a pretty interesting article, ISPs for inter-linking cables. This at a time when the Indian ISPs have been unable to inter-link among themselves. TRAI tells ISPs to route traffic, but still no go. One can find some more info at the National Internet Exchange of India site; see especially the routing & members pages. What is missing in this whole picture is consumers & community. Please read the book & see the case studies given here. It's not just a question of poverty but also of understanding & sharing responsibilities & benefits for one & all. A solution, perhaps, to the digital divide which is between all of us.

Sunday, February 3, 2008

Third-party Repositories & headaches

Hi all,
Another Sunday. For the last couple of days I have been fighting a bug on the latest Hardy & have been getting nowhere.

https://bugs.launchpad.net/bugs/188125

While I was talking to people on IRC, I came to realize that a third-party repository I subscribe to may have some influence on things to some extent.

The package in question is called libgnomevfs2.

The repository I subscribe to for bleeding edge :- http://www.sofaraway.org/ubuntu/minirepos/

I'm subscribed to it for the latest firefox build, the latest miro, gstreamer & a couple of other packages.

While hunting around to understand what could be wrong, I came across this article/blog :-

http://www.happyassassin.net/2007/10/24/mistakes/

So far, to establish whether this is really the cause, I have to downgrade the packages.

The package as it was listed :-

shirish@Mugglewille:~$ apt-cache madison libgnomevfs2-0
libgnomevfs2-0 | 1:2.21.91+svn20080131r5441+bbot-1 | http://www.sofaraway.org gstreamer0.10/ Packages
libgnomevfs2-0 | 1:2.20.1-1ubuntu1 | http://archive.ubuntu.com hardy/main Packages
gnome-vfs | 1:2.20.1-1ubuntu1 | http://archive.ubuntu.com hardy/main Sources

The downgrade procedure :-

shirish@Mugglewille:~$ sudo aptitude install libgnomevfs2-0=1:2.20.1-1ubuntu1
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Building tag database... Done
The following packages are BROKEN:
libgnomevfs2-0 libgnomevfs2-dev
0 packages upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
Need to get 261kB of archives. After unpacking 0B will be used.
The following packages have unmet dependencies:
libgnomevfs2-0: Depends: libgnomevfs2-common (= 1:2.20.1-1ubuntu1)
The following actions will resolve these dependencies:
Downgrade the following packages:
libgnomevfs2-common [1:2.21.91+svn20080131r5441+bbot-1 (unstable, now) -> 1:2.20.1-1ubuntu1 (hardy)]
libgnomevfs2-extra [1:2.21.91+svn20080131r5441+bbot-1 (unstable, now) -> 1:2.20.1-1ubuntu1 (hardy)]

Score is 179

Accept this solution? [Y/n/q/?] Y
The following packages are unused and will be REMOVED:
libavahi-client-dev libavahi-common-dev libavahi-glib-dev libgconf2-dev libidl-dev liborbit2-dev
libselinux1-dev libsepol1-dev
The following packages will be automatically REMOVED:
libgnomevfs2-dev
The following packages will be DOWNGRADED:
libgnomevfs2-0 libgnomevfs2-common libgnomevfs2-extra
The following packages will be REMOVED:
libgnomevfs2-dev
0 packages upgraded, 0 newly installed, 3 downgraded, 9 to remove and 0 not upgraded.
Need to get 1067kB of archives. After unpacking 10.2MB will be freed.
Do you want to continue? [Y/n/?] Y
Writing extended state information... Done
Get:1 http://archive.ubuntu.com hardy/main libgnomevfs2-extra 1:2.20.1-1ubuntu1 [88.3kB]
Get:2 http://archive.ubuntu.com hardy/main libgnomevfs2-0 1:2.20.1-1ubuntu1 [261kB]
Get:3 http://archive.ubuntu.com hardy/main libgnomevfs2-common 1:2.20.1-1ubuntu1 [718kB]
Fetched 1067kB in 1min55s (9216B/s)
(Reading database ... 304193 files and directories currently installed.)
Removing libgnomevfs2-dev ...
Removing libavahi-client-dev ...
Removing libavahi-glib-dev ...
Removing libavahi-common-dev ...
Removing libgconf2-dev ...
Removing liborbit2-dev ...
Removing libidl-dev ...
Removing libselinux1-dev ...
Removing libsepol1-dev ...
dpkg - warning: downgrading libgnomevfs2-extra from 1:2.21.91+svn20080131r5441+bbot-1 to 1:2.20.1-1ubuntu1.
(Reading database ... 303689 files and directories currently installed.)
Preparing to replace libgnomevfs2-extra 1:2.21.91+svn20080131r5441+bbot-1 (using .../libgnomevfs2-extra_1%3a2.20.1-1ubuntu1_i386.deb) ...
Unpacking replacement libgnomevfs2-extra ...
dpkg - warning: downgrading libgnomevfs2-0 from 1:2.21.91+svn20080131r5441+bbot-1 to 1:2.20.1-1ubuntu1.
Preparing to replace libgnomevfs2-0 1:2.21.91+svn20080131r5441+bbot-1 (using .../libgnomevfs2-0_1%3a2.20.1-1ubuntu1_i386.deb) ...
Unpacking replacement libgnomevfs2-0 ...
dpkg - warning: downgrading libgnomevfs2-common from 1:2.21.91+svn20080131r5441+bbot-1 to 1:2.20.1-1ubuntu1.
Preparing to replace libgnomevfs2-common 1:2.21.91+svn20080131r5441+bbot-1 (using .../libgnomevfs2-common_1%3a2.20.1-1ubuntu1_all.deb) ...
Unpacking replacement libgnomevfs2-common ...
Setting up libgnomevfs2-common (1:2.20.1-1ubuntu1) ...

Setting up libgnomevfs2-0 (1:2.20.1-1ubuntu1) ...

Setting up libgnomevfs2-extra (1:2.20.1-1ubuntu1) ...
Processing triggers for libc6 ...
ldconfig deferred processing now taking place
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Writing extended state information... Done
Building tag database... Done

At least this is done; documenting it for myself. It took me a lot of time to get it right, because aptitude doesn't tell you what's missing if the argument is not right. Of course, it's gonna take quite a bit of time to understand where things are going wrong. I would also be editing out the third-party gstreamer repo so I can test packages from Hardy main rather than the repo, for gstreamer at least.
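For the record, there is an alternative to commenting the repo out of sources.list altogether: APT pinning (see man apt_preferences) can leave the sofaraway.org line in place but stop anything from being auto-installed from it. A sketch, with a judgment-call priority (anything below 100 means APT will never upgrade to that origin's versions):

```shell
# Keep the third-party repo listed but refuse to auto-install from it:
# append a pin to /etc/apt/preferences (priorities below 100 mean APT
# will not upgrade to versions coming from that origin).
sudo tee -a /etc/apt/preferences <<'EOF'
Package: *
Pin: origin www.sofaraway.org
Pin-Priority: 90
EOF
apt-cache policy          # verify: the origin now shows priority 90
```

This way apt-cache madison still shows the repo's versions, but updates come from Hardy main.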

Of course, the jury (i.e. me) is still out on whether, after commenting the repo out, Hardy will get the news & give me updates about gstreamer from its own archive. It seems logical that it should, now that the other repository is out of the picture, but still, who knows. I intend to find out in the upcoming days.

An update :- Finally I'm synced with Hardy; the downgrade is complete. I should thank Ubulette, without whose help (along with the shell script) it would have taken more days to complete. At first glance things seem to be back to normal/good. Let's see.