[00:05] shaya (~spotter@160.39.244.205) joined #vserver. [00:05] anyone around that knows vserver internals? [00:07] shaya: how detailed? [00:09] how the INADDR_ANY works inside a vserver? [00:10] have a project for school that needs to do a similar thing, allow a process to only operate on a set of addresses, but still allow INADDR_ANY to work [00:10] I was trying to follow the code to figure out how it worked, but there's a lot of things going on in the tcp code so wasn't able to figure it out [00:11] shaya: sorry, i'm not nearly good enough for that. You need to talk to Bertl [00:11] shaya: what kind of school is that anyway ? [00:11] Columbia [00:12] pardon ? [00:12] Columbia University [00:12] okay [00:12] /whois shaya [00:13] hmm [00:13] just ip [00:13] oh well [00:13] is shaya your real name ? [00:13] yes [00:14] maybe unriel and surriel can help you as well [00:14] shaya: where does the name come from ? [00:15] isaiah [00:15] what is that? [00:15] biblical prophet [00:15] some kind of area or religion [00:15] okay [00:15] so you study computer science? [00:16] yes [00:16] me too :) [00:16] how far are you ? [00:16] working on phd [00:17] okay, a bit further than me then [00:17] Simon (~sgarner@210.54.177.190) joined #vserver. [00:18] shaya: what are you writing your phd about ? [00:18] don't know yet [00:19] os related [00:19] shaya: all linux ? [00:19] all the work my group does is on linux [00:19] work a different group does is split b/w linux and openbsd [00:20] got to go [00:20] shaya (~spotter@160.39.244.205) left irc: Quit: Client exiting [00:20] shaya: what about doing a combination of vserver and openmosix ? [00:20] damn [00:34] Nick change: Bertl_oO -> Bertl [00:34] hi all! [00:35] hey Bertl [00:35] Bertl: there was a dude? or dudess by earlier wanting some advice about how the INADDR_ANY works inside a vserver? [00:38] hmm ... interesting, you did help her/him?
[00:39] Bertl: no, i'm not smart enough for that [00:40] hey all those 'smart' people on this channel, it has to rub off somehow? [00:40] hmm, okay, not too smart, I have a log ;) [00:41] hi Simon! [00:41] Bertl: it does, just slowly [00:41] Bertl: heh [00:51] and how was your day? [00:52] Bertl: a lot better than yesterday [00:52] well, that is something, isn't it? [00:53] Bertl: sure is, but then again, it didn't take a whole lot to get better [00:53] Bertl: you see, yesterday i made 2 big fuckups :( [00:53] think positive ... [00:53] it could have been 3 more ;) [00:54] Bertl: first at work i accidentally added a / at the end of the source in an rsync command, and along with a --delete, this cleared the whole external cvs server :( [00:54] restoring from backup takes 12 hours :/ [00:54] hmm, some time for something else, right? [00:55] then i get home, and i just have to move our user control system from one DB to another here at my dorm, and i accidentally set the password of all the mysql users to the same :/ [00:55] Bertl: just time to get some sleep [00:55] i am so confused about vserver versions! [00:55] vserver-0.28.src.tar.gz [00:55] 27 [00:55] 26 [00:55] Bertl: today i did have 2 very interesting lectures in my "innovation and starting own business" class [00:56] shuri: those are jacques versions ? [00:56] @shuri well, you have to look close(r) ... _util_-vserver [00:56] is enrico, vserver (note the missing util) is Jack ... [00:56] so it is 2 different projects? [00:57] yes, exactly .... [00:57] well it is confusing.. [00:57] two people, two versions, different features ... different approaches ... [00:58] which is the better? [00:58] :P [00:58] depends on how you define 'better' ... [00:58] the 'official' version is util-vserver ... but vserver has the longer tradition ;) [00:59] maybe at some point jack and enrico work together on one set of tools ... nobody knows ...
[01:00] Bertl: there is freevps too [01:00] if you think this would be beneficial, feel free to make a suggestion on the list ... [02:04] JonB (~jon@129.142.112.33) left irc: Quit: Client exiting [02:24] shuri (~ipv6@207.236.226.187) left irc: Quit: ipv6 [05:28] Nick change: Bertl -> Bertl_zZ [09:02] loger1 joined #vserver. [09:07] loger (~loger@213.159.118.2) left irc: Ping timeout: 493 seconds [09:07] Nick change: loger1 -> loger [11:08] mhepp (~mhepp@r72s22p13.home.nbox.cz) joined #vserver. [11:09] mhepp (~mhepp@r72s22p13.home.nbox.cz) left irc: Client Quit [13:41] infowolfe (infowolfe@68.33.215.209) left irc: Read error: Connection reset by peer [13:59] Bertl_zZ ? [14:02] ccooke (~ccooke@80.1.164.238) joined #vserver. [14:02] BobR (~georg@oglgogl.BMTP.AKH-Wien.ac.at) joined #vserver. [14:17] serving (~serving@213.186.190.195) left irc: Ping timeout: 480 seconds [14:17] Nick change: Bertl_zZ -> Bertl [14:17] morning all! [14:18] moin moin [14:18] now you should know what Bertl_zZ is ;) [14:18] BobR (~georg@oglgogl.BMTP.AKH-Wien.ac.at) left irc: Ping timeout: 480 seconds [14:18] @jonathan could you test the patch I sent you? [14:18] or maybe you already did? [14:19] I didn't know if you were at your keyboard, but not /backed yet [14:19] yes, the patch seems good so far [14:19] okay, so it a) fixes your issues, and b) doesn't nuke something else? [14:20] what's with the shift from vmalloc to kmalloc? [14:21] hehe .. vmalloc should not be used for kernel structures ... [14:22] seems to do (a) and (b) so far as has been tested, which is not yet far enough [14:23] okay, the question for me is, does this fix other 'hang' reports as well, maybe by the change from vmalloc to kmalloc ... [14:23] so is the {v,k}malloc unrelated to the fix? [14:23] IIRC, matt ayres reported some lockups, but I don't remember him using dynamic contexts ... [14:24] i.e. is the fix to just move the switch_user_struct out from the spin_lock?
[14:25] well, the lockup is fixed by moving everything which could sleep out of the spinlock, which is a good idea anyway ;) [14:25] the vmalloc -> kmalloc change is not related to the issues you reported ... [14:26] Simon (~sgarner@210.54.177.190) left irc: Quit: so long, and thanks for all the fish [14:26] :) [14:27] From: Linus Torvalds [email blocked] [14:27] No, please please please don't do these things. [14:27] vmalloc() is NOT SOMETHING YOU SHOULD EVER USE! It's only valid for when [14:27] you _need_ a big array, and you don't have any choice. It's slow, and it's [14:27] a very restricted resource: it's a global resource that is literally [14:27] restricted to a few tens of megabytes. It should be _very_ carefully used. [14:27] There are basically no valid new uses of it. There's a few valid legacy [14:28] users (I think the file descriptor array), and there are some drivers that [14:28] use it (which is crap, but drivers are drivers), and it's _really_ valid [14:28] only for modules. Nothing else. [14:28] heh, i just read that today myself [14:29] Date: Tue, 11 Nov 2003 10:24:46 -0800 (PST) [14:29] hi kestrel! [14:30] alec it was, right? [14:31] hi there [14:31] yep :) [14:32] I'm not good with names, but I try ;) [14:33] :) [14:33] i am slowly picking up the kernel functions and what they do...it'll take me a while i think, there is a lot of shit in there [14:34] long numbers are easier to remember for me, but converting names to hex values and back is boring 8-) [14:34] hehe :) [14:35] i am crap at remembering numbers and names [14:35] I guess the main issue in the kernel is, that you have to 'adjust' to each code section, as each author has its own style of coding ... [14:35] but fortunately, quite good at remembering esoteric unix command arguments and function parameters [14:35] yeah, definitely [14:36] the 2.4 kernel is much worse than the 2.6 in this regard ... [14:36] naming contractions such as 'xid' for 'ctx_id'?
;) [14:37] hmm, well, this is actually the result of my 2.6 kernel programming and the smooth transition from s_context to xid ;) [14:38] righto [14:38] herbert: so it seems [14:38] in the long run, this should get an xid_t like uid ;) [14:39] I've recently had to change from hating the '{' at the end of the line to accepting that it _is_ after all the One True Way [14:40] hehe ... you can always use 'indent' ;) [14:40] hehe [14:41] but the kernel coding style differs a lot from 'default' C coding ... [14:41] I've found it's _much_ easier to learn some flexibility [14:41] sure [14:42] "goto" considered essential :) [14:43] if somebody asks me, should I use goto in my C userspace tool, my answer ten years ago was, nooo! you don't do that ... nowadays I'd say, sure if it helps you to make the code more readable, go ahead ... [14:44] basically the constraints are just so different in kernel space [14:44] newtonian to relativistic [14:45] Action: beardfactor10 mumbles away to himself [14:46] well, actually any code using goto, can be written without goto, and if done properly, will result in the very same assembler statements ... [14:46] but 20 levels of indentation are a no-no, if you ever want to read that code again ;) [14:47] a function that needs some decomposition? [14:48] and a few extra sets of function call overhead [14:49] goto can be used well, if you simply think about what the processor is doing... [14:50] There's too much of a fashion these days for pretending the processor understands raw perl, for instance... [14:50] except for the processor designs which understand raw perl, of course ;) [14:51] ccooke: just throw some more MHz^H^H^HGHz at it :) [14:51] bertl: you show me a processor designer who decides to make a processor that uses perl as its assembly language, and I'll show you a small greasy smear under a pile of all the perl books ever printed. Eventually. [14:52] (I do actually code in perl.
but I try to be aware when this is a good thing and when it's bad) [14:53] well you should not underestimate the weirdness^H^H^Hcreativity of asic and chip designers ... I've seen a working prototype for java ... [14:53] I've seen some lovely stuff, yeah [14:53] but *not* good for a real system, please :-) [14:53] People don't know enough languages these days [14:53] bah [14:54] there is only one language MIX ;) [14:54] (any minute I'm going to say "In my day", and then one of you will have to shoot me) [14:54] Intercal. [14:54] Intercal for everything. [14:54] so you love unary divisions ... [14:54] no, I just love torturing people. [14:55] ahh, good, what's your next question, then ;) [14:55] or comment to vserver, bring it on, I can handle it ;) [14:55] Fine. [14:55] when will 2.6 support be in :-) [14:56] already is there ;) [14:56] it is? [14:56] damn, how did I miss that [14:56] how stable? [14:56] laziness, too much complaining ... [14:56] I've been holding off testing 2.6 because vserver wasn't working yet... [14:56] nah, just too busy [14:57] too busy to read the mailing list, but not too busy to IRC? ah, I see :) [14:57] beardfactor: I do read the list... [14:58] which is why I'm wondering how I missed it [14:58] well, regarding stability ... I think it is stable, but not very functional at the moment ... [14:58] http://vserver.13thfloor.at/Experimental/patch-2.6.0-test9-vs0.01.diff [14:58] right [14:59] so I still can't test it. [14:59] Oh, well [14:59] it's *good* to see it there [14:59] 120 19:42 on Tue 18/11/03 Herbert Poetzl ( 23) [Vserver] Linux 2.6 Test Patch ... [14:59] beardfactor: yeah. I just found it. [14:59] ahh, somebody actually _is_ reading my emails ;) [14:59] *sigh* [14:59] don't get too happy Herbert [15:00] yeah, I know, nobody answered yet .... :( [15:00] I noted it, but there's no chance of us using 2.6 for many, many months :( [15:00] Action: ccooke wants to try 2.6 [15:00] but?
[15:00] simply because I'm hoping that I'll have to patch the kernel less [15:01] what do you need to patch atm besides vserver? [15:01] Action: ccooke checks [15:01] ipvs [15:01] boot screen [15:01] acpi [15:01] swsusp [15:02] currently [15:02] hmm, boot screen? [15:02] reiser (which also include the quota stuff dsvr requires) and aic7xxx updates [15:02] not used currently, but work want Pretty Boot Screens on the servers [15:02] to replace the confusing text crap my init scripts spew [15:03] ahh I see ... colorful pictures clouding important information ... [15:03] (and I mean spew. You try autodetecting what network you're on and which one of five locations the image repository is on, while booted from CD...) [15:03] ccooke: I want a pretty picture underneath the text crap :) [15:04] something like mandrake does ... [15:04] my boot scripts know about seven different networks. Each one generates about five lines of error messages as it tries it... [15:04] Action: beardfactor10 splutters [15:04] Mandrake? [15:05] <*cough*> Real men use [15:05] beardfactor: it's a linux distribution. Red hat based. HTH, HAND [15:05] yup, latest version 9.2 IIRC, came with some background image on the boot console ... [15:05] indee [15:05] d [15:05] I've not got it working yet [15:06] one of our ex-developers used SUSE which had some fancy boot screen too [15:06] thing is, the boot screen stuff seems to be mostly undocumented [15:06] and relies on having an initrd [15:06] well, I managed to get rid of it in a few minutes ;) [15:06] now, I find that when I try to use an initrd booting from isolinux, it hangs... [15:07] if i am writing a kernel module and need to communicate with it, what is the best method under 2.6? i have read conflicting reports stating, variously, ioctls, sysfs, libfs.... 
[15:07] I did have my laptop (debian) patched, but then removed it to simplify debugging an nvidia and swsusp interaction issue [15:07] I've not had swsusp working since 2.4.20 [15:07] @alec depends on what data you want to transfer and in which direction ... [15:08] seems it doesn't like SMP [15:08] but (is it vserver or ipvs?) won't compile without SMP... [15:09] re [15:09] this is another thing I'd like fixed. Oh yes. [15:09] okay, jonathan, do you intend to check the 'patched' version in a 'production' like environment, in the near future? [15:09] say (~say@212.86.243.154) left irc: [15:09] hi maja! [15:11] Bertl: I'm going to fill out killer to do some thrash testing for a couple of days first [15:12] okay, I would suggest to code a version which uses fixed id's as well, as this is a different code path ... [15:13] ccooke: I've been running swsusp on 2.4.22 only UP for several weeks. Only ate my reiserfs the once ;) [15:13] Bertl: exactly so [15:14] @maja you are not from/in austria, or? [15:14] so far the patch cures the killer.c problem. which might not be the problem we had in the wild... [15:14] that is why I was asking ... [15:15] bertl: hmmm, as i understand it, libfs should be used if the driver needs to export a lot of stuff to user applications... [15:15] for control information, i would have thought ioctls on the device node would be the way to go [15:15] what data do you want to transfer? [15:15] but linus is thoroughly against that [15:16] well, there would be two classes of user<->kernel passing in this driver...one would be low-traffic control information like, "sync device", etc. [15:17] and the other would be the actual block data [15:17] block data == block device ... and you are done ... [15:17] the control information, that depends on how much and in what direction ...
[15:17] well, the issue is that the block driver will read/write network traffic [15:18] i am thinking of going with a user space daemon for that part [15:18] well what about the nbd implementation? [15:18] to avoid having lots of network logic in kernel space [15:18] i have been looking at that, definitely [15:18] and i could probably just as easily rip that out and use it [15:19] but one of the ideas is to use a cluster communication layer such as spread [15:19] which would not fit well with kernel space [15:19] if you want to go for a userspace application, you can do so right now ... [15:19] apparently, from reading, it is possible to do zero-copy between kernel and user space, which would be ideal [15:19] say (~say@212.86.243.154) joined #vserver. [15:19] eh? [15:19] there are a lot of existing 'userspace' filesystems, which should match your requirements perfectly ... [15:20] well, it's a block device [15:20] not a file system [15:20] i did find a user-space character device driver abstraction layer [15:20] but it didn't have block devices completed [15:21] actually, perhaps it would be best if i mailed you my proposal...if you don't mind, i'd appreciate you having a look at it? [15:21] hmm, by the way, did you have a look at ENBD? [15:21] yep, i have used enbd in the past [15:21] we discussed this yesterday :) [15:21] oh .. [15:21] you mentioned that you know philip [15:22] Action: Bertl tries to look busy ... [15:22] hehe :) [15:22] np [15:22] the main criteria was being able to add/remove nodes from the cluster and have the block device "dynamically" use available space for mirroring [15:23] you know fusd ? [15:23] yes, that is the "user-space character device driver abstraction layer" i was speaking of [15:24] it does not do block devices, unfortunately [15:24] ahh okay ... [15:24] say (~say@212.86.243.154) left irc: Read error: Connection reset by peer [15:24] hmm [15:24] can you use ENBD to make an (effectively) clustered filesystem? 
[15:24] no [15:24] it is not active-active [15:24] there you would go for GFS or CODA ... [15:24] you can put something like GFS on top of it though [15:25] lustre looks to be the most promising cluster file system [15:25] yeah forgot that ... [15:25] but it's definitely not ready for prime time [15:25] bertl: no. you can't. [15:26] I've seen no clustered FS on linux that I could use in a production environment [15:26] (short of things you need to pay £BIGNUM for [15:26] well, gfs (commercial version) is apparently quite good [15:26] but, yeah, as you say...not free [15:26] :( [15:26] @alec I remember some postings (on LKML?) regarding kernel/userspace block devices ... [15:26] it sucks [15:27] herbert: okay, i'll have a look [15:27] maybe contact people playing around with that, and sharing some ideas ... [15:27] yeah... [15:28] actually, i'll dig up this article... [15:28] say (~say@212.86.243.154) joined #vserver. [15:28] @ccooke as much as I like the concept of clustering for whatever, I've not seen many usages for that (including a cluster filesystem) [15:29] bertl: you've not seen many uses of clustering at all? [15:29] are you serious? [15:29] well, most of the typical scenarios, can be done simpler by other means ... [15:29] actually, it's enbd that i was thinking of...he mentions here (http://www.it.uc3m.es/~ptb/nbd/) that he does user-space networking [15:30] hmm [15:30] look, a week ago or so, we had the (Open)Mosix discussion for vserver ... [15:30] I really like the concept, don't get me wrong here ... but where would this give you a real advantage over 3-4 independent machines? [15:30] Okay. Let me explain my position in this: I'm a sysadmin. [15:30] I want to do things in the best way [15:31] not the easiest? [15:31] I have found *lots* of problems where clustering is the right answer [15:31] like for example? [15:31] bertl: Easiest is usually not a good solution. [15:31] "Easy" would be, say, "outsource it all" [15:31] web servers.
[15:32] let's take this example: [15:32] we have a website which, upon each user query, fires off about 100 separate HTTP requests to different servers [15:32] strange but okay ... [15:32] that puts a *severe* processor load on a system [15:33] now, with one webserver, it would slow to a crawl [15:33] and what if that webserver died? [15:33] so you take 100 and are back at 1:1 [15:33] oh dear, no website, my wage isn't paid... [15:33] take 100 what? [15:34] we cluster it [15:34] web servers ... and round robin the request ... [15:34] there are, currently, 10 servers. [15:34] um. [15:34] you are aware that round robin is a form of cluster, right? [15:34] nope! [15:34] it's not the most *Efficient* way to do it, but... [15:34] for windows people this is a cluster ... [15:34] unless you mean round robin DNS? [15:35] which is not a good solution *at all* for a heavy load environment [15:35] DNS would be sufficient in this case ... [15:35] no way [15:35] but if you have a distributor which 'assigns' the clients ... you have a single point of failure again ... [15:35] not at all [15:35] i think two lvs directors in failover sitting in front would be the go [15:36] this is what I use vserver for :-) [15:36] *all* of my webservers also run the firewall image [15:36] one of them acts as the director [15:36] and what 'advantage' do you get over 10 systems, same setup, DNS rr ? [15:36] herbert: the typically put two directors (distributors) in failover [15:36] s/the/they/ [15:36] the active firewall only answers web requests if it's the only webserver [15:37] if the active server falls over, another takes its place [15:37] bertl: well, let's see... [15:37] 1) I have more control. I can add and remove servers without DNS propagation issues [15:37] and hot jumpin is also no clustering in my book ... by the way ;) [15:38] ccooke: so what is the actual issue? [15:38] 1) you can solve that, by assigning the 'bad' address to a 'good' server ...
[15:38] 2) simpler management - our servers have one IP, which makes it *So much easier* dealing with luser admins at clients, when they need to unblock us... [15:39] 3) flexibility. I can direct requests at servers with lower load [15:39] 2) can be done by the firewall via NAT ... [15:39] 3) can also be done via port redirection locally ... [15:39] bertl: um. port redirection. [15:39] ugh! [15:39] and your setup doesn't use _any_ cluster functionality yet ;) [15:40] what do *you* call a cluster? [15:40] something where tasks float around ;) [15:40] heh, that is a narrow definition of a cluster [15:40] there are many many many different types of cluster [15:40] http://news.bbc.co.uk/1/hi/technology/3239928.stm [15:40] damnit [15:40] http://www.manlug.mcc.ac.uk/20000916/Scali-Sept-00/sld002.htm [15:41] (I wish they'd fix copy and paste in Firebird) [15:41] so, ccooke, what exactly are you trying to fix in your solution that you hoped enbd would solve? [15:41] even: http://www.brainydictionary.com/words/cl/cluster144970.html [15:41] kestrel: a clustered filesystem would help us *immensely*. [15:41] hey, we have an IRC cluster ... group hug! [15:42] in what situation? [15:42] i'm not disputing the fact, i'm just wondering what your application for it is [15:42] well, in this 'broad' definition, NFS is a clustered filesystem ;) [15:42] we need each of the systems to share data between them - cookies, session IDs, apache configs, webroot [15:42] ah [15:43] have you had a look at the cluster foundry on source forge? [15:43] what I would *like* is a clustered filesystem mounted on, say, /data/cluster and scripts locally to copy stuff that needs to be used often to /data/live (for instance) [15:43] cluster-nfs may be what you are looking for: http://clusternfs.sourceforge.net [15:44] I'm doing it ATM with rsync. it works fine. 
But it would be a much more elegant solution with a clustered FS [15:44] bertl: but you should admit that you're using a different meaning of cluster to the common one [15:44] I would suggest a look at CODA ... [15:44] agreed [15:44] tried it. [15:44] AFS? [15:45] beardfactor: not a chance. [15:45] ever tried to make it work? [15:45] ah. not yet [15:45] well, the only vaguely viable options are lustre, opengfs...and maybe cluster-nfs (though i haven't tried it) [15:46] what I saw of cluster-nfs suggested it's solving a different problem quite well [15:46] but what I need is NFS with failover of NFS servers [15:46] which is not easy [15:46] no [15:46] it's not [15:46] and pretty much impossible without your servers going down for 5 mins or so [15:46] I'd *love* coda to work [15:47] and in another 6 months, I'll try it again if it looks like it's changed [15:47] have you tried opengfs? [15:47] kestrel: probably. Let me check my notes a sec [15:48] I still don't see any reason for using a 'cluster' filesystem yet ... [15:48] bertl: simply because it makes the system simpler and more robust. [15:48] except for the fact, that it would be cool ... [15:49] since i haven't read all the conversation here, i may be wrong [15:49] http://clustering.foundries.sourceforge.net/repository.pl?section=clustering&op=list [15:49] what is the real advantage over rsync, or active application data propagation? [15:49] there are four cluster file systems listed [15:49] but i think ccooke wants to use a clusterfs as a san/nas [15:49] if I can get it to the point where my clients just need to do a mount -t something host:/fs /data/CLUSTER/, then it'll be much, much nicer [15:49] @maja probably ... [15:49] maja: well, our budget won't stretch to a pair of netapps....
[15:49] which *would* be perfect [15:49] bertl: network performance issues for example [15:50] 2 servers, 1 nas -> 1 server pushes with 100mbit/1gbit data into the nas, the other one only needs 1/100 of it [15:50] if you use rsync, both server links would be under heavy load [15:51] even if you do not need all the data on the second server [15:51] sure, but he mentioned 'config', 'cookies'(guess this should be session data) ... [15:52] as i said, i haven't had the time to read all the conversation ;) [15:52] @maja does austria ring a bell? [15:52] bertl: ? [15:53] just because of the name ... [15:54] mhm, i still don't understand [15:54] okay, then forget it ;) [15:54] austria is a country... the one i live in?! [15:54] ok [15:55] ahh so Raoul Raminder Andreas? .... [15:55] mhm [15:55] indeed... [15:56] funny ... isn't it ... [15:56] wtf? [15:56] :) [15:56] the world isn't that large ... [15:56] so who are you? [15:57] just a guy, livin in austria too ... [15:57] i did not know that i'm that famous ;) [15:58] we all know the 'great' Raoul! [15:59] still at the TU-Wien ... [15:59] ccooke: i believe redhat's cluster manager can do nfs failover within 30 seconds...they have a set of patches to both kernel and daemons which allow this [16:00] I have ha-nfs setup at work with the heartbeat package [16:00] hey dan! [16:00] hi :) [16:00] kestrel: ooh, really? [16:00] mrbawb: are you using stock kernel? [16:00] kestrel: yup [16:00] @dan do you know when/if matt will be around? [16:01] Bertl: I'm afraid I don't know [16:01] failover time is under a minute [16:01] mrbawb: do you get stale handles on failure? [16:01] kestrel: nope [16:01] that seems to be the biggest problem, from reading bug reports and mailing lists [16:01] ah [16:02] hmm [16:02] I have to go now. I'm headed up to Massachusetts for thanksgiving :) [16:02] hey, have a nice one ... [16:02] enjoy it [16:02] bye [16:03] (what *is* the correct term to wish someone well for thanksgiving?) [16:03] "bye"?
:) [16:03] turkey it is! ;) [16:03] hehe [16:03] yoda you are, and claim my five monetary units I do [16:04] heh [16:04] ... [16:04] bertl: mhm, as i don't have a clue how you know about me, maybe you will shed some light on it once you've got some time ;) [16:04] ccooke: cluster manager (clumanager) is the redhat "application" that does all their failover stuff [16:05] but i'm fairly sure the nfs stuff is freely available...somewhere [16:05] looking back over these notes, there are some *really* nice ideas in clustered FSs. I just wish they were being actively developed or (in most cases) stable to the point of usability [16:05] @maja, no problem ... but you can find the answer by simply digging the 'Allwissende Müllhalde' (the "all-knowing garbage dump") [16:05] kestrel: I'm sure I looked at it. Just can't remember it [16:05] lustre looks to be the most promising in my opinion [16:06] striped across the network, redundant, highly available [16:06] yeah [16:06] nice [16:06] pity it don't work ;) [16:06] yet [16:06] indeed [16:06] that is kind of a key component [16:06] Action: ccooke tries to remember which of them was implemented using a filesystem daemon written in perl... [16:06] man, that's nasty [16:07] I mean, great way to get it up and running, *wonderful* for early testing [16:07] but imagine rolling it out on the system that pays your wages [16:07] *shudder* [16:07] heheh [16:07] scary [16:07] lustre must work reasonably well [16:07] it's being used in production clusters...http://www.clusterfs.com/ [16:08] hmm [16:08] but i couldn't get it working reliably [16:08] nor me [16:08] actually, i got it working reliably, but failover didn't work [16:08] when I tried, I just couldn't get it working [16:08] which, i guess, is not exactly "reliable" in the strictest sense of the word [16:08] serving (~serving@213.186.190.221) joined #vserver.
[16:09] it was painful, that's for sure [16:09] ah, yes [16:09] it wouldn't even compile with my kernel [16:09] one of ipvs or vserver didn't like it [16:09] ah [16:09] and since those two are essential to me, I canned it [16:10] it sounds like we had similar goals [16:10] i came to the conclusion, as it appears you have, that there is no solution yet [16:10] it's a 'pick two' scenario: reliable, easy, cheap [16:10] heh, so true [16:10] oh, yes... Bertl? you define a cluster as something in which a task can migrate, right? [16:10] so very true [16:10] then a web server cluster with shared sessions is a cluster ;-) [16:10] yup, shared computing resources, as a matter of fact ... [16:11] by your definition [16:11] the "task" being "servicing that user" [16:11] well, the task can't migrate in this scenario ... [16:11] no? [16:11] why not? [16:11] only the 'work' migrates ;) [16:11] *grin* [16:12] look, a simple example: [16:12] I know what you mean. [16:12] okay, I save that example for later ;) [16:12] I just use a wider definition [16:12] (*muttermutterlikeeveryothersanepersonmutter*) [16:13] i suspect most people do ;) [16:13] (wait. I just called myself sane. My apologies.) [16:13] well, right, I like to have 'side on side' 'hot-jumpin' 'failover' and 'cluster' [16:13] okay [16:13] right after cluster there is SMP/NUMA/etc ... [16:13] what do you use as a common term when referring to that *class* of things? [16:14] garbage ;) [16:14] ha [16:14] (out of interest, what is your job?) [16:14] (I'm wondering if this is a differing-dialect thing) [16:15] (or if you're just deliberately awkward ;-) [16:15] deliberately awkward ... [16:16] ah, well [16:16] that's okay, then [16:16] haha [16:16] i believe the correct terminology is "deliberately obtuse" :P [16:16] I hate it when sysadmins come to me, and say, hey imagine, I configured a windows NT cluster, works perfectly, I can login on both machines ...
[16:17] okay, point here [16:17] speaking as a sysadmin, the people you've just described are what I would refer to as "Trained Monkeys, possibly with an MCSE" [16:17] heh [16:18] now you are insulting the monkeys ... that's not nice ;) [16:18] I always insult monkeys. [16:18] (At this company, they call the developers "The monkeys") [16:18] because of the bananas, probably, right? [16:19] ok, i'm off again (stupid school) [16:19] ciao [16:19] bertl: probably [16:19] byebye ... [16:19] see you [16:21] (I don't actually hate windows. I just dislike people who think an MCSE is reason to call yourself a sysadmin. I mean, right tool for the right job, you know? That's why I wrote a fully working RFC-2821 compliant mail server in bash this week...) [16:21] keyword "tool" [16:21] (and I will defend the bash MTA as being "correct" to anyone who disputes it :-) [16:21] I don't hate windows, I just hate Micro$oft, for putting computer sciences back into the stone ages ... [16:21] god, yes [16:22] as it stands, windows is still the best thing to use on the desktop for most people. [16:22] it's also the best applications server. [16:22] let it touch the internet? over my dead body... [16:22] that application loader tool? [16:22] bertl: loads of stuff. it has the development and mature tools. [16:23] hmm [16:23] I wonder if vserver will work on arm? [16:23] then I could run it on my PDA :-) [16:23] do you have one spare, with linux? [16:24] to test on, I mean ... [16:24] sure [16:24] I have a nice Zaurus C750... [16:25] well, let me have a look at the code ... will take about 10-15 minutes to get basic arm support ... [16:25] not yet [16:25] I've had it for a couple of weeks so far [16:25] and? [16:25] need to do more work before I can try new kernels [16:25] I know the theory [16:25] well okay, let me know when you are there ... [16:25] but until I've had time to do it with known-good stuff... 
[16:26] plus, there's a possible hardware fault - just had a new battery sent to try to fix it [16:26] I have to say, the Zaurus is an *amazing* device [16:26] I can type pretty much full-speed on it [16:27] it has an incredible 640x480 screen... [16:27] so far, I've shifted all my usenet reading to it [16:27] mail will be shifted to it as soon as the bash MTA is better tested... [16:28] (this is why it's the right tool; limited space. I have a full MTA in 30K, while the smallest "real" one I can find is about 1.5Meg) [16:29] hmm, mutt works well for me ... [16:29] I'd have to recompile mutt for it, and I haven't got cross compilation installed yet [16:29] unless I can find an old package [16:29] i love mutt [16:30] I find it less featured than pine [16:30] or, if it has the features, they're hidden [16:30] that's called 'efficient' [16:30] um, no. [16:30] in what way? [16:31] pine has a *really* *amazingly* useful feature to select messages by size, date, text string (in subject, from, to, body or all text)... [16:31] once selected you can apply commands to everything selected [16:31] hide unselected mails... [16:32] etc [16:32] sounds like 'l' and tag and tag-do ... [16:32] great [16:32] yep, you can do all that in mutt [16:32] documentation? [16:32] wonderful [16:32] ~d - [16:32] etc. [16:32] I've been searching documentation for ages [16:32] man muttrc, then search for ~d [16:32] thanks! [16:33] plus mutt has the most important feature of all: colour [16:33] mmmm, colour [16:33] um [16:33] pine actually does better colour support. sorry. [16:33] here is something interesting: http://www.sciencething.org/geekthings/index.html [16:33] userblock [16:34] I don't judge this as a feature to keep using it, mind, but it does [16:34] hehe, neither do i [16:34] hmm, old but sounds useful ... [16:35] yeah [16:35] dec 2001 is quite old [16:35] maybe you should contact him ...
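[The 'l' / tag / tag-do workflow and the ~d pattern mentioned above combine like this — an illustrative sketch of mutt's standard index commands and pattern operators; the dates and search strings are made-up examples:]

```
# in mutt's index view:
l ~d 01/11/2003-30/11/2003   # limit the view to messages dated in that range
l ~z >100K                   # limit to messages larger than 100 kilobytes
T ~s vserver                 # tag every message whose subject contains "vserver"
T ~f bertl                   # tag every message from an address matching "bertl"
;d                           # tag-prefix: apply 'delete' to all tagged messages
l ~A                         # clear the limit again (match all messages)
# ~b searches the body, ~t the To: header; 'man muttrc' documents the rest
```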
[16:35] i may do that [16:35] at the very least i shall look at his code :) [16:36] is there any way of making mutt resync to the server/mailbox automatically, then? [16:36] if, for nothing else, it can serve as a bad example ;) (don't think so, by the way) [16:36] ccooke: there is an interval you can set [16:36] hehe [16:37] that's the worst thing about mutt - finding I have to reread *everything* because I forgot to press $ [16:37] mutt's imap support certainly leaves a lot to be desired [16:37] kestrel: god, yes [16:37] set check_new=yes [16:37] problem is, it's the best imap client for the console [16:37] well, I'm not convinced of imap yet ;) [16:38] works on maildirs and mh mailboxes [16:38] bertl: I guess you haven't ever had to admin a mail server :-) [16:38] imap is the only solution for shared mailboxes imho [16:38] well, a company server [16:38] imap is *great* [16:38] it solves so many of my problems [16:39] what imapd do you use? [16:39] actually, what i hate the most about mutt is the imap support [16:39] cyrus [16:39] the single message store is *good* [16:39] specifically, the habit it has of timing out then not letting you reconnect!!! [16:39] AGGGGGGGGGGGGGGGGGGGGhhhhhhhhhhhhhh [16:39] so you have to restart mutt [16:39] kestrel: hey, guess what I use mutt for :-) [16:39] Action: kestrel_ bitch slaps mutt [16:39] one of these days I'll have to start reading the code [16:39] damn, i can understand your lack of enthusiasm then :) [16:40] okay, now let us compare it to a real mail client, like Outlook ;) [16:40] Action: Bertl ducks ... [16:40] hahaha :) [16:40] bertl: *thwap* [16:41] I have to admit - pine is the nicest mail client I know of. I'd prefer mutt to work *right*, though [16:42] and pine to have less frequent security bugs [16:43] userblock is not useful [16:43] Action: kestrel_ has judged it unworthy [16:43] kestrel: yeah?
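[The "interval you can set" and the IMAP timeout complaint above map to a few real muttrc options — a fragment assuming a mutt 1.4-era build with IMAP support; the interval values are arbitrary examples:]

```
set mail_check=60       # poll the current mailbox for new mail every 60 seconds
set timeout=15          # wait at most 15 seconds for a keypress before polling
set imap_keepalive=300  # poke the IMAP server every 5 minutes so it won't drop the connection
set check_new=yes       # check mailboxes for new mail (the setting quoted above)
# '$' still forces an immediate resync of the open mailbox
```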
[16:43] well, to be more specific, it's not what i'm after :) [16:44] ha [16:46] actually, i started thinking about this project after not finding any good free solutions to cluster storage [16:48] oh? [16:50] well, as you are undoubtedly aware, there are generally two components to cluster storage... [16:50] there's the actual block device and then the shared file system on top [16:51] (excluding lustre, which does both simultaneously) [16:51] right [16:51] as sans are so goddamn expensive, i figured linux needed a distributed, fault tolerant block device [16:52] it does, yes [16:52] then i added the additional criterion of requiring it to handle adding more storage nodes gracefully [16:53] and here i am, looking at kernel source [16:53] I wonder if there is any *existing* filesystem that will support writes from more than one source? just something that checks for writes before it gives data from the cache, say [16:56] such as gfs, or are you thinking of something different? [16:56] something commonly used [16:56] ah [16:57] i think the main issue is metadata updates... [16:57] the actual data wouldn't be *too* hard [16:58] but even using a filesystem like that, you still need shared storage of some kind [16:59] enbd is one option, of course [17:00] yeah [17:00] you could keep a block-map, and pass invalidation requests, of course interleaved with alive marks ... [17:01] and only pass changes if requested by some other node ... [17:02] at the block level, yeah, but at the file system level it would be much more complicated... updates to inodes would need to be synchronised, with all that that entails [17:02] distributed locking would then come in, blah blah [17:02] nasty shit [17:02] well, you need a lock manager for that, but that is another story ... [17:03] you were talking about a block level cluster solution, or did I misinterpret you?
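[The block-map-with-invalidation idea sketched above (invalidation requests to peers, changes passed only when another node asks) can be illustrated with a toy in-memory model. This is purely a sketch: the class name, method names, and ownership rule are all invented, and the real alive-mark/failure handling and network transport are omitted.]

```python
# Toy model of a replicated block store with peer invalidation.
# Hypothetical sketch -- not any real Linux block driver.

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}    # block number -> data cached on this node
        self.current = {}  # block number -> node holding the latest copy
        self.peers = []

    def join(self, other):
        # symmetric peering; a real cluster would manage membership
        self.peers.append(other)
        other.peers.append(self)

    def write(self, blockno, data):
        self.cache[blockno] = data
        self.current[blockno] = self
        # "pass invalidation requests": peers drop their stale copy and
        # remember which node now holds the authoritative block
        for peer in self.peers:
            peer.cache.pop(blockno, None)
            peer.current[blockno] = self

    def read(self, blockno):
        # "only pass changes if requested by some other node": fetch
        # lazily from the current holder on a cache miss
        if blockno not in self.cache:
            holder = self.current[blockno]
            self.cache[blockno] = holder.cache[blockno]
        return self.cache[blockno]
```

[After `b.write(0, ...)`, node `a`'s cached copy of block 0 is gone, and the next `a.read(0)` pulls the fresh data from `b` -- data only crosses the wire when someone actually asks for it.]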
[17:04] i am, yes, but ccooke was referring to a filesystem [17:04] the block device shouldn't be too hard [17:04] hence why i am doing it ;) [17:04] unfortunately, it will not be very useful ... at least not for filesystems, which cannot handle multipath/shared access to the blockdevice ... [17:05] but the existence of a block device makes the filesystem possible [17:05] yes, that is the plan [17:05] something like gfs can then live on top of the block device [17:05] obviating the need for a san, or hardware shared storage [17:05] $$$ [17:05] sure ... [17:06] did you spend a thought on iSCSI? [17:06] yep, definitely... there are several solid implementations [17:07] maybe that could be the base for something like this ... [17:07] but iscsi is only really useful in this situation if you want to export the block device to something other than linux [17:07] it would be a very useful thing, no doubt [17:07] perhaps that could be dvm v2.x :) [17:08] well okay, I just let you guys do the development ... [17:08] all thoughts are welcome herbert :) [17:09] it's a requirement to get other people's opinions and ideas, otherwise you can get locked into a single way of thinking that is ... wrong :) [17:10] kestrel: even if they're deliberately obtuse opinions ;-) [17:11] heheh :) [17:11] actually, i agree with that... because at least it makes you go over your own arguments and re-prove them to yourself [17:12] exactly [17:15] the more i think about it, the more i think that doing all the networking in user space is the way to go... the user space daemon would just forward the data to/from the kernel block device for "presentation" [17:34] Nick change: surriel -> riel [17:46] ccooke: have you looked at inter-mezzo? [17:52] so it seems 2.6 will be out before xmas ;) [17:54] that is cool! :) [17:55] hmm, I'm not sure ... [17:56] or am I the only one who has this 2.4 deja vu ...
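[A user-space daemon forwarding block data, as described above, needs some request framing on the wire. Here is one possible format -- entirely made up for illustration, loosely in the spirit of nbd's fixed-size header; the opcodes and field widths are arbitrary choices:]

```python
import struct

# Hypothetical wire format: 1-byte opcode (b"R" read / b"W" write),
# 8-byte block offset, 4-byte length, then the payload for writes.
# For reads, length says how many bytes the client wants back.
HEADER = struct.Struct(">cQI")

def encode_request(op, offset, length, payload=b""):
    """Frame one block request; payload is empty for reads."""
    return HEADER.pack(op, offset, length) + payload

def decode_request(buf):
    """Split one framed request back into its fields."""
    op, offset, length = HEADER.unpack_from(buf)
    payload = buf[HEADER.size:]
    return op, offset, length, payload
```

[The daemon would loop reading one header (13 bytes) plus payload off the socket, then forward the operation to the local kernel block device and send the reply back framed the same way.]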
[17:57] you're probably right, but the sooner it's out the sooner it will get hammered on by the teeming hordes [17:58] bed time for me, thanks for the chat guys [17:58] good night ... [17:58] thanks for listening to my ideas herbert :) [17:59] ur welcome ... [17:59] g'night [17:59] zzzz [18:07] re [18:31] kestrel: yes, it's not mature enough for use [18:31] (sorry, I'm at work - was afk) [19:08] mhepp (~mhepp@r72s22p13.home.nbox.cz) joined #vserver. [19:16] mhepp (~mhepp@r72s22p13.home.nbox.cz) left irc: Quit: Tak ja padaaaaM [19:37] Bertl (~herbert@MAIL.13thfloor.at) left irc: Quit: leaving [20:32] riel (~riel@66.92.77.98) left irc: Quit: ---> airport [20:51] beardfactor10 (~jonathan@212.69.216.20) left irc: Quit: Off 'til Tuesday [20:52] BobR (~georg@195.34.145.96) joined #vserver. [20:59] shuri (~ipv6@207.236.226.187) joined #vserver. [21:04] Nick change: BobR -> BobR_o4L [21:49] sannes (ace@sannes.org) joined #vserver. [21:50] what does http://vserver.13thfloor.at/Experimental/patch-vs1.1.5-dynamic-fix03.diff fix? [21:52] I see it changes all memory allocations in vcontext.c .. [21:53] but, I'm not sure what it fixes.. I think maybe i have struggled with something this fixes.. [22:04] and what is that delta-dynamic-01.diff file it makes? [22:13] Nick change: BobR_o4L -> BobR_zZ [22:17] BobR_zZ (~georg@195.34.145.96) left irc: Quit: leaving [00:00] --- Fri Nov 28 2003