2011-02-11

Getting git accessible via smart http with LDAP integration

I work with a legacy development group which has been engulfed by a larger company. The legacy company was started in 2001, and at the time it was much faster to set up NIS/YP for our directory service than LDAP; I also think that amd (our automounter, which is likewise showing its age due to neglect by upstream) doesn't support LDAP maps. Well, I bet it is still much faster to set up NIS, but I would have a much harder time justifying the ancient NIS technology now. In any case, the larger company uses LDAP and thus has an authentication system disconnected from ours—which is actually advantageous, since we still have full administrative privileges over our systems, which would not be true if we were using their LDAP. However, it also means that when someone in the other company needs access to our SCM, we have to create local shell accounts for them. Tedious for us, and I'm sure not fun for them given the new password they need to remember. Some other day I might try to get a hybrid NIS/LDAP system running, but that day is not today.

In any case, today the situation came up again that someone needed access to our source. Initially I knew it would be read-only, so git-daemon was the obvious solution. So I created:

/etc/xinetd.d/git-daemon
# default: on
# description: The git server provides read-only git service for /git
service git
{
  disable = no
  socket_type = stream
  wait = no
  user = root
  log_on_success += USERID
  log_on_failure += USERID
  server = /usr/local/bin/git
  server_args = daemon --inetd --verbose --export-all /git
}
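Incidentally, `--export-all` serves everything under /git. If that ever feels too generous, git-daemon's default behavior is to serve only repositories carrying a marker file. A sketch, with a temp directory standing in for /git:

```shell
# Stand-in for /git; on the real server you would touch the file in the real repo.
GIT_ROOT=$(mktemp -d)
git init --bare --quiet "$GIT_ROOT/testrepo.git"
# Without --export-all, git-daemon only serves repos containing this marker file:
touch "$GIT_ROOT/testrepo.git/git-daemon-export-ok"
```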

One `/etc/init.d/xinetd reload` later and I am all set. But…I know that sometime I am going to need to provide read-write access for this user. Creating another shell account is tedious as I already mentioned, so what could I do about this?

Well, Smart HTTP would seem to be the best solution. It uses the web server (Apache) for authentication, and I have already explored LDAP integration with Apache+PHP, so that should be fairly straightforward. However, I've never actually played with Smart HTTP before, so it will be a good learning experience. To start, I'll get read-only access going:

/etc/httpd/conf.d/git-http-backend.conf

SetEnv GIT_PROJECT_ROOT /git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/local/libexec/git-core/git-http-backend/

This is straight out of the git-http-backend man page, so I expected it to work. One `/etc/init.d/httpd reload` later and all is well: `git clone http://git/git/testrepo` works perfectly. Next is getting LDAP working. Fortunately, as I said, I've done LDAP authentication with Apache before, and during the development of that functionality I managed to connect to the larger company's LDAP server. But since they were using Active Directory and I had a lot of problems getting the DN for myself, I just wanted to double-check my configuration.

shell> ldapsearch -x -W -H ldap://ldap -D 'CN=Seth Baka,CN=Users,DC=example,DC=com' -b "DC=example,DC=com"
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

Uh what? This was the exact command from my shell history (a 16k line HISTFILE is occasionally very useful) so how could it fail? I know my password is correct. Could they have zapped LDAP access from me? After various experiments from different machines and using different binding names, I came across a very informative webpage which said that you could stick your email address in for the bind DN so you don't have to know the exact syntax of your DN.

shell> ldapsearch -x -W -H ldap://ldap -D 'sethnobaka@example.com' -b "DC=example,DC=com"

# numResponses: 1002
# numEntries: 1000
# numReferences: 1

Success! Further investigation of the resulting LDIF file showed that they rearchitected their LDAP schema without telling me—very rude. Using the new DN for me I'm able to bind and I'm well on my way.

/etc/httpd/conf.d/git-http-backend.conf
SetEnv GIT_PROJECT_ROOT /git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/local/libexec/git-core/git-http-backend/
<LocationMatch "^/git">
   AuthName "Git Repos"
   AuthType Basic
   AuthBasicProvider ldap
   AuthzLDAPAuthoritative off
   AuthLDAPUrl "ldap://ldap/dc=example,dc=com?sAMAccountName"
   AuthLDAPBindDN sethnobaka@example.com
   AuthLDAPBindPassword mypassword
   Require valid-user
</LocationMatch>

Well, I have to say that having to store my personal LDAP password in an Apache config file is pretty annoying. Can't Apache check the authentication via a bind instead of binding first and then validating accounts and the like? Pretty annoying if you ask me, but the advantage is that multi-domain schemas get handled more automatically, so I can see some small benefit. In any case, I should be authenticating, so let's see if we can clone again. After a quick `/etc/init.d/httpd reload` and another `git clone http://git/git/testrepo`, I am prompted for my password (via an X popup) and then am able to clone. Huzzah!
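One small mitigation, assuming a stock Apache: move the two bind directives into a separate root-only file and Include it, so the password at least stays out of the main (often world-readable) config. The private path here is illustrative:

```apache
# Inside the <LocationMatch> block in git-http-backend.conf, replace the
# AuthLDAPBindDN/AuthLDAPBindPassword lines with:
Include /etc/httpd/private/ldap-bind.conf

# /etc/httpd/private/ldap-bind.conf (owned by root, mode 0600):
AuthLDAPBindDN sethnobaka@example.com
AuthLDAPBindPassword mypassword
```

Apache reads its configuration as root before dropping privileges, so the restrictive permissions don't get in the way.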

Now let's try to push. It probably won't work, but the error should be informative. After I committed a tiny change, the push fails as expected.

shell> git push
error: unpack failed: unpack-objects abnormal exit

Looking in the httpd error log shows me:

error: insufficient permission for adding an object to repository database ./objects

Well, no surprise there. The apache user is not allowed to write into my git repository. I expected this, and at least getting this far before failing is a good sign, I guess. All I need to do is…hmm…I guess create a user in the correct group for apache-git integration and then use Apache's suexec to get the backend to run under the correct UID. No problem.
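As an aside, whatever UID ends up running the backend, the repository itself needs to cooperate: git has a setting for exactly this shared-group scenario, which keeps newly pushed objects and refs group-writable. A sketch with a temp path standing in for /git/testrepo (the gitldap/dev names are from my setup):

```shell
repo=$(mktemp -d)/testrepo.git       # stands in for /git/testrepo
git init --bare --quiet "$repo"
# Make git create new objects and refs group-writable:
git --git-dir="$repo" config core.sharedRepository group
# On the real repo you would also fix existing files:
#   chgrp -R dev "$repo" && chmod -R g+w "$repo"
```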

/etc/httpd/conf.d/git-http-backend.conf

SetEnv GIT_PROJECT_ROOT /git
SetEnv GIT_HTTP_EXPORT_ALL
SuexecUserGroup gitldap dev
ScriptAlias /git/ /usr/local/libexec/git-core/git-http-backend/
<LocationMatch "^/git">
   AuthName "Git Repos"
   AuthType Basic
   AuthBasicProvider ldap
   AuthzLDAPAuthoritative off
   AuthLDAPUrl "ldap://ldap/dc=example,dc=com?sAMAccountName"
   AuthLDAPBindDN sethnobaka@example.com
   AuthLDAPBindPassword mypassword
   Require valid-user
</LocationMatch>

Well OK, I actually find it pretty annoying that suexec cannot be constrained to a particular CGI or Directory as far as I can see, but it doesn't seem to matter yet, so off we go.

shell> /etc/init.d/httpd reload
shell> git push
error: The requested URL returned error: 500 while accessing http://git/git/testrepo/info/refs

I will not belabor the restarts and tests I did to track down the error, but suffice it to say that suexec is very anal about where the programs can go (/var/www for me) and whether or not they can be symlinks (hint, they cannot), and who the owner must be (why is root not a fine owner?). So in the end I had to write a wrapper shell script and modify the apache configuration:

/etc/httpd/conf.d/git-http-backend.conf

ScriptAlias /git/ /var/www/cgi-bin/git/git-http-backend/
SuexecUserGroup gitldap dev
<Directory /var/www/cgi-bin/git>
AllowOverride None
Options +ExecCGI
Require valid-user
</Directory>
<LocationMatch "^/git/">
AuthName "Git Repos"
AuthType Basic
AuthBasicProvider ldap
AuthzLDAPAuthoritative off
AuthLDAPUrl "ldap://ldap/dc=example,dc=com?sAMAccountName"
AuthLDAPBindDN sethnobaka@example.com
AuthLDAPBindPassword mypassword
Require valid-user
</LocationMatch>

/var/www/cgi-bin/git/git-http-backend
#!/bin/sh
export GIT_PROJECT_ROOT=/git GIT_HTTP_EXPORT_ALL=ALL
cd "$GIT_PROJECT_ROOT"
exec /usr/local/libexec/git-core/git-http-backend "$@"

Restart and try to push again. Success! Now we finally are able to push, except…the email announcement comes from the wrong email account, because the suexec user is now used instead of the normal user's shell account. Forging the sender in the post-receive script solves that problem. But the next problem is that gitweb stopped working, and it stopped working because of SuexecUserGroup. I still really don't understand why you are not allowed to restrict SuexecUserGroup by location. Doing it by virtual host is a dull ax as opposed to a fine scalpel. I can't divide by virtual hosts here, so I need to do the same thing I did before: create a wrapper shell script in the right location. Of course, I have to use a different "right location", since I don't want the safety "Require valid-user" I used with cgi-bin/git/ to take effect, and even worse, I need to dance around the location of the static files.

/etc/httpd/conf.d/gitweb.conf
Alias /gitweb/static/ /var/www/html/gitweb/static/
ScriptAlias /gitweb/ /var/www/cgi-bin/gw/

/var/www/cgi-bin/gw/gitweb.cgi
#!/bin/sh
exec /usr/local/share/gitweb/gitweb.cgi "$@"

Now, finally, everything seems to be working. This was more painful than it should have been, so I hope that someone who needs to fight the same stupidity can use this little guide.

2011-01-22

Gentoo, initrd, mdadm, and a online conversion of the root filesystem

My gentoo system froze last night, kinda mysteriously. I/O to /var seemed to freeze, which is extremely odd since /var is not a standalone filesystem but rather lives on /, and access to other random parts of / didn't seem to go wrong. Very mysterious, and I eventually got myself into a state where I had to start using magic SysRq keys to recover. Strangely, C-A-S-e (which terminates processes) cleared the problem, whatever that problem was. This also allowed the system to log a number of messages which showed various processes hung in a variety of reiserfs journal calls. Cause or effect? No way to know for sure.

While my system rebooted cleanly, it seemed like I should take this opportunity to convert away from murderfs to a filesystem with a future. Without spending too much time thinking about it, I picked ext4 (remember, this is for /—my main storage filesystem remains xfs). Unfortunately I later discovered that I had failed to compile ext4 support into my kernel, but that was easy enough to resolve, so I will not recount that excursion further.

Because I am not an idiot, I have RAID configured for my filesystems. On this system, I am using RAID-1 for everything. I have three identical disks, so I am actually using 3-way RAID-1, meaning I can lose a disk and still have redundancy. Because I am really not an idiot, I am not using hardware RAID. Hardware RAID is nice and all, but I hardly think it is going to give you much performance gain on RAID-1 (ignoring any mythical battery-backed cache), and the control you lose by having an embedded RAID controller is pretty serious. I scrub my RAID partitions every week so I can discover any disk corruption before it gets serious—something that is normally difficult or impossible with embedded controllers. In any case, what software RAID-1 really means here is that I can convert my root filesystem online. In fact it is pretty simple. First boot into single user mode (normally adding "single" to the kernel boot line is sufficient, but with gentoo's stupid initrd option processing, you have to "mother may I" it using "init_opts=single"). Once you know this it is only a minor annoyance; I have a fake kernel boot option which documents it so that I don't have to remember or google it during disaster recovery.

mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --create /dev/md3 --level=1 -n 3 --metadata=0.90 /dev/sdc1 missing missing

While the commands are all mostly pretty obvious—remove a partition and then create a new raid using that partition—the last command could probably use some explanation. I decided to inform the newly created raid array that the end goal would be to have three devices. I believe this is not strictly necessary for RAID-1, but this way everything is reserved up front—so I specified 3 devices and said the other two were missing. The other interesting bit is the --metadata option. The newer metadata versions for mdadm provide more power (and specifically power that I could really have used later), but it appears that you need a grub which supports the newer mdadm superblock format. Unfortunately, the grub that I have does not support it (I don't really understand why Gentoo hasn't upgraded; my guess is that you would need to reinstall the boot blocks, which is probably a bit tricky for some users, and certainly it is non-obvious how to do so for software RAID users). So I went ahead and specified the old-style metadata.

mkfs.ext4 /dev/md3
mount /dev/md3 /mnt/usb
mount -r -o bind / /mnt/cdrom

Well, obviously I used two random /mnt directories that I had lying around. The interesting bit here is my use of a read-only bind mount of /, so that when I copy the root filesystem I can see the data underneath the mount points. Thus I neither copy nor skip /proc (for example); instead I make a faithful copy of the /proc directory hidden underneath the mount point. Speaking of faithful copies:

cd /mnt/cdrom
tar cSf - . | (cd /mnt/usb; tar xSf -)

I use S to handle sparse files, which are a real concern on filesystems holding /var/log (wtmp, lastlog, etc.) and a good idea to handle anytime. Because I don't have a separate /boot on this particular system, I need to install boot blocks—well, "need" is perhaps too strong a word since I won't actually be booting from /dev/sdc, but it is still a good habit to get into.
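To make the sparse-file point concrete before moving on, here is the same tar pipe in a sandbox, with temp directories standing in for /mnt/cdrom and /mnt/usb:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
truncate -s 1048576 "$src/lastlog"    # a 1 MiB sparse file, like lastlog often is
echo "real data" > "$src/notes"
# Same pipe as above; S preserves the holes instead of writing a megabyte of zeros:
(cd "$src" && tar cSf - .) | (cd "$dst" && tar xSf -)
```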

touch /boot/magic-nounce
grub
find /boot/magic-nounce
root (hd2,0)
setup (hd2)
quit

As you see here, I created a temporary file to let me know which of the many copies of /boot on the system is the new one that needs boot blocks installed. I then run grub, ask it to find that file, and go through the normal installation process for the identified partition. Only one more minor fixup.

emacs /etc/fstab /etc/mdadm.conf
# Replace reiserfs with ext4 for /

Yes, yes. fstab doesn't actually control the mounting of /, so it really doesn't matter whether the filesystem type listed there is accurate. However, it is best to remove any contradictory information to avoid future confusion. In a fit of obsessive/compulsive behavior, I also went ahead and updated my manually specified raid assembly lines (by UUID) in /etc/mdadm.conf. I don't think many people bother to touch this file, nor do I think it is used in my configuration, but whatever.
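For reference, the assembly lines in question look something like this (the UUIDs below are made up; real ones come from `mdadm --detail /dev/mdN`):

```
# /etc/mdadm.conf
ARRAY /dev/md0 metadata=0.90 UUID=f1e2d3c4:b5a69788:00112233:44556677
ARRAY /dev/md3 metadata=0.90 UUID=89abcdef:01234567:76543210:fedcba98
```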

Now I am ready to do a test boot. At this stage, I was under the (false) impression that the kernel generated the /dev/md# numbers in the order that it found md partitions on disk, and since I converted sdc1, I believed that it would not detect the new root first. I was wrong about the reason, but the effect was the same.

init 6
# Interrupt the grub auto-boot
# Replace /dev/md0 with /dev/md3 on the real_root option, add init_opts=s

This allows me to have the kernel boot from the new raid partition I created, and I can validate that everything is OK while in read-only mode. I checked /proc/mdstat to ensure that the raid was created the way I assumed (it was) and then /etc/fstab to ensure that my / was ext4 from the correct RAID (it was). I then did an "exit" and let the system boot to multi-user. Everything seemed good. Now I am ready to start switching over.

mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
mdadm --manage /dev/md3 --add /dev/sda1

I flipped the BIOS boot disk over to the new raid partition so that the BIOS would be using the new boot blocks and so forth. I was still under the impression here that this would automagically set md0 to be the new partition. While the raid was resynchronizing the newly added device, I went and installed the boot blocks again.

grub
find /boot/magic-nounce
root (hd0,0)
setup (hd0)
quit
watch cat /proc/mdstat

After the boot block installation, I watched (`watch cat /proc/mdstat`) the mdstat file to wait for the raid to be fully synchronized. Once that was done, ^C and a rebooting we go.

init 6
# Interrupt grub auto-boot
# add init_opts=s

After booting into single user mode, I did a quick check (of fstab) to see if I was using the correct root partition. Uh, no. I discovered that I was completely wrong about the kernel numbering by discovery order (which is good in general, just bad for my hopes for a clean and fast conversion). I then started looking at manual pages and google to try and find what the magic was. I quickly found out that there was a preferred minor device number, but…there is no way to manually specify it! (at least for the 0.90 metadata version—more recent versions have the name option which I hope overrides the saved number). Instead it uses the last minor/md number the drive was assembled under. I couldn't believe it. Sure, for partitions other than / it is no trouble to boot into single user, remove any auto-assembly, and then re-assemble the devices with the names you desire, but if you are mucking around with / you are kind of out of luck here. Fortunately there is a way to ask the kernel to assemble the devices the way you want. Hurrah!

init 6
# Interrupt grub auto-boot
# add md=0,/dev/sda1,/dev/sdc1 init_opts=s

Uh…no joy. It is clearly documented, so this should work. Well, perhaps the kernel's RAID auto-detection runs first and claims the devices. Fortunately, there is a way to turn auto-detection off. Hurrah!

init 6
# Interrupt grub auto-boot
# add raid=noautodetect md=0,/dev/sda1,/dev/sdc1 init_opts=s

Uh…no joy. It is clearly documented so this should work. Hmm. Well…I notice that the initrd seems to be printing some lines about mdadm assembly. Could it be reverting the kernel's assembly in some fit of idiocy? Clearly it is not using a copy of /etc/mdadm.conf from the true root filesystem since I happened to update that beforehand with the correct UUID when I added ext4 support, so it must be undoing what the kernel did and re-doing auto-assembly. Nice. Not.

init 6
# Interrupt grub auto-boot
# Remove initrd line from current config
# Change root=/dev/ram0 to /dev/md0
# Remove init=/linuxrc real_root=/dev/md0
# Add md=0,/dev/sda1,/dev/sdc1 s
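For the record, the edited stanza ends up looking roughly like this (the kernel image name is illustrative, and the trailing `s` is the temporary single-user flag):

```
title Gentoo (ext4 root, kernel md assembly, no initrd)
root (hd0,0)
kernel /boot/kernel-2.6.36-gentoo root=/dev/md0 md=0,/dev/sda1,/dev/sdc1 s
```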

Joy! I finally am booted to /dev/md0 with the right /etc/fstab. The system appears to have renumbered the old /dev/md0 to /dev/md127. If for some reason you require an initrd to boot—say you don't have the md driver built into your kernel—you have two options. First, boot off a rescue CD (like the gentoo install CD). Second, build a new kernel with the md driver built in (mount /dev/md3 onto /mnt/cdrom, chroot /mnt/cdrom, cd /usr/src/linux, make menuconfig, save the config, and then do your normal genkernel thing).

cat /proc/mdstat
head /etc/fstab
mdadm --detail /dev/md0
mdadm --examine /dev/sda1
mdadm --examine /dev/sdc1
mdadm --detail /dev/md127
mdadm --examine /dev/sdb1

Inspecting the output of the detail and examine mdadm commands, we can see that the "Preferred Minor" number has been properly reset…for everything except /dev/sdb1. Well, I remember from the man page how to fix that.

mount -r /dev/md127 /mnt/cdrom
mdadm --examine /dev/sdb1

Everything looks quite nice. Now let's see if going through a normal boot with initrd will work (well, honestly this kinda looks like I might not need initrd, but I will ignore that thought and press on).

init 6
# Interrupt grub auto-boot
# Add init_opts=s

Still nice. fstab shows the right root, I am booted from /dev/md0. Everything OK.

head /etc/fstab
cat /proc/mdstat
exit

I press on to multi-user mode. Everything is still looking good. At this point, I have a clean boot using my brand new ext4 filesystem, so the last remnants of reiserfs can be swept away.

mdadm --stop /dev/md127
mdadm --manage /dev/md0 --add /dev/sdb1
grub
find /boot/magic-nounce
root (hd1,0)
setup (hd1)
quit
watch cat /proc/mdstat

Once my raid is resynchronized, I can rest on my laurels…though honestly, are laurels comfortable to rest on? Back in the days of the Roman empire, sure, but now? A nice memory-foam mattress is probably much more comfortable.

2007-11-05

Uniden DECT 6.0 DCX100 is pretty stupid

A while back I wanted to replace my current wireless phone system with a new one, so that I could have more phone extensions. After a little research I chose the Uniden DECT 6.0 DCX100. I got it home, fired it up, and immediately found some problems. These problems were not so serious as to force me to return the phone, but they are very annoying.

One serious/annoying problem is that the speakerphone has the squelch set incorrectly. By squelch, I mean the volume the remote person must speak at to engage the external speaker on the phones. Generally, unless the person I am listening to is a serious loudmouth, the speakerphone is unusable because I only hear part of every other word. The related lack of an external headset jack makes this more serious.

A very annoying set of problems relates to putting people on hold, transferring calls, and conference calls. When you put anyone on hold, whether simply to hold or to transfer the call to another phone, they get some really silly fake (MIDI or similar) carousel music. This is fine when you are talking to friends or family, but when you are on a conference call or a customer call, it really is pretty embarrassing.

The phone documentation suggests that conference calls are impossible except those involving the base station. Silly me, I believed it. However, in preparing this rant I looked into the base station documentation to see how that worked, only to discover that it tells how to conference between two phones. The method is non-intuitive, but it does work. You are still forced to hear that silly music, though.

Some of the menuing is pretty lame/confusing, but this is kind of normal I fear.

The other features seem fine. Range is good. Battery life is fine. The ability to push phone-list changes around is good (though at least in my environment, the requirement that you do stuff on both phones to push a change is annoying, so a shared address book would be better). I still have not decided whether the beeps on all phones when there is a message are good or not. The key lock, so that you can put the phone in your pocket, is good.

The moral of the story? Hmm. Even when the documentation says it is impossible read more documentation? Try try again? Uniden documentation sucks? Writing well researched rants can sometimes lead to enlightenment? Well, something like that.

2007-11-02

Fighting exchange calendaring

My company decided to outsource email for reasons that surpass knowing. What I do know is that mailstreet must have decided to do everything in their power to prevent any normal method of accessing the raw data from third-party applications from working. Even getting outlook working requires jumping through hoops well beyond the normal. Since I don't have windows on my desktop, and since even if I did, outlook doesn't have the very simple features I demand from a calendaring application(*), I wanted to get the data into something that does.

A little investigation proved that google calendar seemed to fit the bill. While their documented help suggested that it was impossible, it actually turns out that you can convince google to auto-accept the appointments you send it with your primary calendar. This told me that my goal was possible, but google still would not auto-accept calendar events if I forwarded them from exchange via outlook rules. I was confused by this for some time, since I eventually got all of the mail headers to be exactly the same, but then I realized the distinguishing factor google was using was actually in the MIME calendar body. Well, there was nothing that forwarding from exchange was going to do about that, but I was already pulling email down to my desktop using fetchmail, so I could simply slip in a procmail rule, write a perl script, and be on my way. The perl script was more complicated than I truly desired, possibly because I was being a bit anal about handling the various ways that Exchange has been known to send events, as opposed to the absolute minimum necessary. Still, I'm now able to export most events automatically from Exchange, get them imported into Google calendar automatically, and then get them downloaded into my phone automatically. I then get a 5 minute beforehand reminder from my calendar and a 15 minute beforehand email. What more could I possibly want?
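The procmail side is only a couple of recipes. Something like this (the script path and addresses are illustrative; `B ??` matches against the body, and the `w` flag makes procmail keep the original message if the filter exits nonzero):

```
:0 fw
* B ?? Content-Type: text/calendar
| /usr/local/bin/fix-calendar.pl --to me@gmail.example --from me@corp.example

:0 c
* B ?? Content-Type: text/calendar
! me@gmail.example
```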

Well, this only works for calendar events people schedule me for and for calendar events that I schedule for an alias I am on. Any appointment I create for myself, or any meeting I create for individual addresses including my own, does not get forwarded by Exchange. Why? Only Microsoft knows. Other outlook rules fire, just not the forwarding ones. So, if I want to create an appointment for myself, I need to cc my gmail account directly.

(*) At a minimum, I demand that my calendaring application send email at a configured time before the event to an email address of my choosing, like say an SMS gateway address. At the maximum, allow calendar downloads with the same configurable reminders to my semi-smart phone (smart enough to synchronize calendars, not smart enough to run windows mobile--or is that smart enough to not run windows mobile?).

Don't take this as code I am particularly proud of. It seems to do what I need
and really I have already spent too much time on this puppy anyway. Also it looks like blogger.com was not set up to include code snippets and some bits may not have been included properly (sigh). If you have problems, let me know and I'll see if a cleaner copy can be made available.


#!/usr/bin/perl
#
# Rewrite the ORGANIZER and first ATTENDEE addresses in any text/calendar
# MIME part so that Google Calendar will auto-accept the forwarded invite.
#
use strict;
use warnings;

use MIME::Parser;
use Getopt::Long;

my $USAGE = "Usage: $0 <--to emailaddress> <--from emailaddress>\n";
my %OPTIONS;
Getopt::Long::Configure("bundling", "no_ignore_case", "no_auto_abbrev",
                        "no_getopt_compat", "require_order");
GetOptions(\%OPTIONS, 'debug', 'to=s', 'from=s') || die $USAGE;

die $USAGE unless ($OPTIONS{'to'} && $OPTIONS{'from'});

my $envelope = <>;              # Leading mbox "From " line (kept, unused)
my $msg = <>;
while (<>)
{
    $msg .= $_;
}

# Note: non-calendar mail is NOT echoed back out, so only pipe messages
# that actually contain a calendar part through this filter.
exit(0) unless ($msg =~ m|Content-Type: text/calendar|);

my $parser = MIME::Parser->new;
$parser->ignore_errors(0);

eval
{
    my $entity = eval { $parser->parse_data($msg) };
    my $error = ($@ || $parser->last_error);
    die "$error\n" if ($error);

    for (my $p = 0; $p < $entity->parts; $p++)
    {
        my $part = $entity->parts($p);
        next unless ($part->mime_type eq 'text/calendar');

        # Read the part, unfolding iCalendar continuation lines (those
        # beginning with a space) so each property is one element of @lines.
        my $io = $part->open('r');
        my @lines;
        while (my $line = $io->getline())
        {
            if ($line =~ /^ (.*)/s)
            {
                my $cont = $1;
                chomp($lines[$#lines]);
                $lines[$#lines] =~ s/\r$//;
                $lines[$#lines] .= $cont;
            }
            else
            {
                push(@lines, $line);
            }
        }
        $io->close;

        my $seen_attendee = 0;
        for (my $i = 0; $i < $#lines; $i++)
        {
            # Replace the organizer's address with the --from address.
            if ($lines[$i] =~ /^(ORGANIZER;.*MAILTO:)[^\@]+\@[^;: \r\n]+(.*)/s)
            {
                $lines[$i] = "$1$OPTIONS{'from'}$2";
            }
            # Insert the --to address as the first attendee, reusing the
            # parameters of the existing first attendee.
            if (!$seen_attendee && $lines[$i] =~ /^(ATTENDEE;.*MAILTO:)[^\@]+\@[^;: \r\n]+(.*)/s)
            {
                splice(@lines, $i, 0, "$1$OPTIONS{'to'}$2");
                $seen_attendee = 1;
            }
        }

        # Write the modified calendar body back into the part.
        $io = $part->open('w');
        $io->print(@lines);
        $io->close;
    }
    $entity->print;
};

if ($@)
{
    # Log the error and exit nonzero; with procmail's `w` flag the
    # original message is then kept unmodified.
    chomp(my $err = $@);
    print STDERR "$err\n";
    $parser->filer->purge;
    exit 1;
}
$parser->filer->purge;

exit 0;

On blogger.com and internationalization

Well, this is obviously in the nature of a test, but I do have a complaint: blogger.com doesn't seem to support ISO format date (e.g. 2007-11-02 20:28:35). It seems to support bits and pieces (e.g. 2007-11-02 or 20:28:35) but not both at the same time.

Perhaps it is just my desire for more trivially parsed timestamps which sort, all of which is irrelevant in this context, but why have multiple date formats supported and not support an international standard?

Well, whatever.