My poor, much-abused laptop gets introduced to a lot of networks. Most of them happily use DHCP, but a few need special setup such as static IP addresses. In general, Network Manager handles this fairly well; however, I've yet to find a way within Network Manager to set a default set of search domains for all connections.

The most reliable method I've found to implement this is the resolvconf package. Install it by running "sudo apt-get install resolvconf" and then edit /etc/resolvconf/resolv.conf.d/base to add the following line:
search domain1.com domain2.com
Tell Network Manager to restart whatever connection you're on, and /etc/resolv.conf should then contain the above line.

An added advantage of this method is that resolvconf is smart enough to pick up any search domains set via DHCP, or that you might have added to the connection in Network Manager, and append them to the search line.
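For reference, the whole thing boils down to something like the following on a Debian/Ubuntu box. The domain names are just placeholders for whatever you actually want in the search list:

    # install resolvconf and add a default search list to the base file
    sudo apt-get install resolvconf
    echo "search domain1.com domain2.com" | sudo tee -a /etc/resolvconf/resolv.conf.d/base

    # regenerate /etc/resolv.conf (or just bounce the connection in Network Manager)
    sudo resolvconf -u
    grep search /etc/resolv.conf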

Google recently released an SSL-enabled version of their main page, which is no bad thing. However, it turns out there's a nasty side effect for companies doing search analytics. When you go from an SSL site to a site without SSL, most modern browsers will strip out the referrer data. In the case of going from an SSL-enabled Google to a normal non-SSL site, it means that the non-SSL site will have no idea what search terms were used.

Of course there is a way around this. The simplest is just to SSL-enable the site: if you go from one SSL-enabled site to another SSL-enabled site, the referrer data is retained. There are other options, such as Google appending something like ?query="search term" to each URL it returns, but even if that were implemented I can see it being optional for the user.

Of course, the problem with SSL certs is that you need a dedicated IP address for each SSL-enabled site. There are extensions to TLS which would let you host multiple name-based virtual hosts on one IP (see Section 3.1 of RFC 3546), but I have yet to see significant support for them. As it stands at the moment, IPv6 is probably better supported than the Server Name Indication extension of TLS.
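If you're curious whether a particular server already handles SNI, openssl's s_client can send the extension for you. Something along these lines (the hostname is just an example) shows which certificate comes back with and without the server name being sent:

    # without SNI: you get whatever default certificate is bound to the IP
    openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

    # with SNI: a server that supports it can hand back the right vhost's certificate
    openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject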

So, if a company wants a fast way of getting the referrer from an SSL Google query, the handiest method is probably to SSL-enable their site, which means a dedicated IP address. Anyone who has got this far in the post probably already knows that IPv4 addresses are slowly running out. If every SEO in the place suddenly wants to enable SSL on their customers' sites, there's suddenly going to be a lot of pressure on the IPv4 address space.

I know that if a relatively small percentage of shared hosting sites at work wanted to SSL-enable their sites in the morning, we'd run out of available IPv4 addresses in a flash. However, we do have ~4,000,000,000 IPv6 addresses available, which should be sufficient! It's just a pity that most ISPs wouldn't be able to get to them at the moment.

The big winner in this would be the companies selling the SSL certs. People could use a self-signed cert, but do they really want customers and potential clients to have to click through the various warnings? There are other options such as CAcert, but not all browsers will recognise them as a valid cert.

My own opinion is that the lack of referrers is no bad thing. It might force sites to stop using underhand tricks and just put up proper content.

It would seem that random pie-in-the-sky figures about server virtualisation are one of my berserker buttons. I work in IT, hence I know that everything in IT is a compromise. So when someone on Twitter quoted figures from a Sunday Business Post article stating that the HSE were using 200 servers, and then immediately proclaimed that virtualisation would reduce that number by 75%, I had to respond. Anyone on Twitter is free to look it up.

At work we use virtualised servers extensively. Our whole shared hosting/VPS platform is built on Virtuozzo, and we have numerous other services which are virtualised in the background using other technologies such as Xen, KVM and Hyper-V. It's a brilliant tool when deployed properly and has plenty of other benefits, such as being able to move a virtual server to new hardware in a hurry.

However, if you are to believe the marketing hype, virtualisation will immediately save you X%, where X is a ridiculously large number like 70 or 80. What they always seem to fail to mention is the presumption that you're massively under-utilising your current hardware.

This leads to a lovely self-fulfilling prophecy. The people who move over are the ones under-utilising their current hardware, and they will see massive savings. Those savings are really down to bad planning and over-speccing the hardware in the first place, and virtualisation is the ideal technology to consolidate that hardware while keeping the outward-facing infrastructure looking the same. This means there's a massive selection bias in the figures which virtualisation vendors quote, as they seem to only use these customers as examples.
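To put some purely made-up numbers on it: twenty boxes ticking over at 5% CPU really do fit on a host or two, while the same twenty boxes at 70% barely consolidate at all.

    # illustrative figures only, nothing to do with the HSE or anyone else
    echo "20 * 5 / 100" | bc     # under-utilised: roughly 1 host's worth of CPU
    echo "20 * 70 / 100" | bc    # well-utilised: 14 hosts' worth, little to gain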

If we then look to the other end of the spectrum, people properly utilising their existing infrastructure, virtualisation will still give plenty of benefits, for example being able to move a virtualised server from physical server to physical server, often with no downtime. However, then you have to consider virtualisation overhead. As virtualisation is simply abstracting away the hardware, there is going to be an overhead in the translation. Depending on the technology used, the overhead might be minimal or it might be large enough that new hardware is required to account for it.

There will also be no savings from less hardware in this scenario, as the virtualisation isn't being used for consolidation but for ease of management. If it's a commercial virtualisation product such as VMware, there's going to be extra cost involved. This cost might be offset by decreased administration time, but it's not going to be anything near the figures normally quoted for savings.

To go back to what started all this off, the 200 servers in the HSE: we have no way of knowing what the utilisation is like on these servers. For all we know, it's a fairly heavy Java-based app running on them and the systems are well utilised. It's also possible that they are under-utilised, but without knowing what they're actually doing, it's not possible to pull random figures like 75% out of the air.

DNSSEC Still Pie In The Sky

Afilias recently put up a post claiming that DNSSEC is no longer pie in the sky! The post immediately proclaims that DNSSEC would have stopped the issue on March 24th, where a Chinese root server instance was leaked outside of China. While this is technically true, they seem to be vastly underestimating how far off we are from seeing this happen.

Start at the client level, whether that's a browser, mail server or mail client. At the moment very few clients have native support, and most seem to need to be patched, which is not something the vast majority of end users would be comfortable doing. Microsoft only seem to be supporting DNSSEC in Windows 7 and Windows 2008, although I could be wrong on this. Then there's the variety of browsers on the variety of mobile devices. In all cases it's more likely that you'll have IPv6 support!

The next step is the DNS resolver that the client talks to. This could be your ISP's resolver, your local router, a third party such as OpenDNS or Google, or possibly a dedicated local server. At the moment the chances of it being DNSSEC enabled are minuscule.
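A quick way of checking whether the resolver you're currently using validates is to ask dig for DNSSEC data against an already-signed zone like .org and look for the "ad" (authenticated data) flag in the reply. The resolver address below is just a stand-in for whatever your machine actually uses:

    # query a signed zone through your resolver and ask for DNSSEC records
    dig +dnssec org SOA @192.0.2.1

    # a validating resolver sets "ad" in the flags line, e.g.
    #   ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    # no "ad" flag means the answer came back unvalidated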

In the case of local routers (CPE), Nominet tested a cross-section of CPE devices in 2008. The result?
    As a consequence, we conclude that just 6 units (25%) operate with full DNSSEC compatibility "out of the box." 9 units (37%) can be reconfigured to bypass DNS proxy incompatibilities. Unfortunately, the rest (38%) lack reconfigurable DHCP DNS parameters, making it harder for LAN clients to bypass their interference with DNSSEC use.
Of course even if the router supports DNSSEC, you then have to make sure that the upstream DNS servers support it, which is by no means a given. Comcast are still only testing it which probably puts them well ahead of their competition.

Then you have to make sure that any firewalls between you and the upstream DNS server are correctly set up. It's not unknown for network admins to only allow UDP packets over port 53. This will break horribly with DNSSEC, as the response to a query will be a lot bigger, so it's very likely that the server will have to fall back on TCP. Even if the network admin has opened TCP port 53, it's possible that the firewall "knows" that a DNS packet can never be larger than X bytes and will indiscriminately drop any packet larger than its set limit.
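The firewall side of it isn't complicated, it's just frequently forgotten: port 53 needs to be open over both protocols, and any assumptions about maximum DNS packet sizes have to go. With iptables that's nothing more exotic than the sketch below; your chains and policies will obviously differ:

    # DNSSEC responses regularly blow past the old 512-byte UDP limit,
    # so both large UDP (EDNS0) responses and the TCP fallback must get through
    iptables -A FORWARD -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -p tcp --dport 53 -j ACCEPT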

Then there are the root servers and the various TLD servers. The earliest we'll see a signed root zone is July 2010, and that's presuming their testing goes well. PIR have implemented it on .org already, and various other ccTLDs have either implemented it or have testbeds. Verisign have said that Q1 2011 is when they expect to have it rolled out for .net and .com.
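You can already poke at a signed zone to see what the extra records look like; pulling the DNSKEY set for .org returns the zone keys along with the RRSIG signatures over them, assuming the resolver in between passes DNSSEC records through:

    # fetch .org's zone keys and their signatures
    dig +dnssec org DNSKEY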

Presuming that all of the above has been fully implemented, it's possible that DNSSEC would have stopped what happened on March 24th. However, there's still the leaking of more specific routes, such as what happened to YouTube in 2008, but that's a different problem with different fixes.

The above is only a very quick and nasty overview of the issues with DNSSEC at the moment as far as a client is concerned. There are plenty of other issues to be sorted out, such as transferring domains and key rollover, among others.

Then there's the human element. Phishing won't be cured by DNSSEC; most phishing attacks use absolutely random URLs, such as http://this.is.a.fake.url.com/path/to/bankhomepage.com/login.html. The deployment of DNSSEC also won't force people to upgrade their browsers; IE 5 and IE 6 still make up a good percentage of the browsers out there!

Unfortunately, DNSSEC is going to remain very much pie in the sky for the time being. 

Enhanced AIB Security?

I had just logged into my AIB Internet Banking account when I spotted the following security notice:

From June 23rd you will be required to enter two codes from your AIB Code Card in order to complete the following actions on AIB Internet Banking:

This is only required for certain transactions, but it still seems to be a useless change. If someone has one code, the odds are extremely good that they have the code card. If not, the second code can probably be obtained using exactly the same method as was used to get the first.

If they really wanted to enhance their security, they might be better off deploying something like Rabo Direct's Digipass. I believe they already have something similar for their Business Banking. Unfortunately, this probably won't be done due to cost.

To go slightly off topic, the new AIB Internet Banking site is a vast improvement over the previous incarnation.


Nokia Divide By Zero Error

I think the Nokia Beta Labs need to do a bit more QA; their Enhanced Calculator for the N96 has a slight issue:

[Image: Nokia_Divide_By_Zero.jpg]

This can be replicated by installing the app, opening it and hitting the button in the middle of the directional pad.



An Post Doing Something Right?

I'm pretty amazed! I sent a normal letter from the main Post Office in Carlow to Kerry at 5:45PM yesterday. Got a text at 12:30 this afternoon to say that it had arrived! It's nice to see that at least one public service in this country is doing something right.

Grannymar Toyboy?!

Seemingly I'm now an official GrannyMar Toyboy. I've been given the badge and informed that I have to be photographed wearing it!

However, I didn't want the poor camera broken, so I got a dodgy shot of my super sub with the badge instead.
[Image: toyboy.jpg]

The Folly Of Audiophiles

What's the point of even worrying about whether clothes hangers are better than Monster cables, when in most cases we can't even hear the stereo as intended?

Of course, if you're really worried about the files from iTunes, you can get the uber Denon AK-DL!

Reinventing The CLI Wheel

As part of my day-to-day work I spend a lot of time on the command line. In the vast majority of cases this means SSHing into devices as diverse as Linux servers, Cisco switches, Juniper routers and Fortinet firewalls. While in some cases there will be a GUI available, it's a lot easier to document, script and back up what is being done on the CLI. SSH also has the advantage that it can be accessed from anything from a mobile phone to a Perl Expect script.

I have had the chance to play with a Dell MD3000i over the last few days, which is basically a rebadged LSI/Engenio SAS RAID array. It's a nice bit of kit; however, Dell have seen fit to use the SMI interface for managing the array. The SMI interface is a great idea which means there is a nice "object-oriented, XML-based, messaging-based interface" (buzzword overload!) for doing day-to-day management.

There is a CLI interface to this in the form of SMcli. In the case of Dell, this is a Java app which requires sacrificing goats and/or virgins in order to get it running on anything other than Windows, RHEL or SLES. So much for Java allowing platform independence!

What annoys me is that people have gone to the trouble of creating SMcli, so why not use it as a shell on an SSH server running on the array itself? This would all of a sudden mean that they gain a lot more platform independence, and therefore a larger potential market. The other technologies needed in order to set up the MD3000i are iSCSI and dm-rdac, which are already a solved problem and relatively easy to set up.
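For what it's worth, the iSCSI end of things really is straightforward on a Linux host with open-iscsi; discovery and login against the array's portal is roughly the following, with the portal IP being a placeholder and the multipath/dm-rdac configuration left out:

    # discover the targets the MD3000i is presenting on its iSCSI portal
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10

    # log in to the discovered targets; the LUNs then appear as ordinary block devices
    iscsiadm -m node --login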

Am I mad in thinking that it's in Dell's best interests to put as few obstacles as possible in the way of setting up their products?
