Saturday, 2008-03-29

*** tpb has joined #tp00:00
*** ChanServ sets mode: +o tpb00:00
*** Appleman1234 has joined #tp00:23
*** mithro has quit IRC00:29
*** mithro has joined #tp00:34
*** Appleman1234 has quit IRC00:45
*** bddebian has quit IRC00:50
*** mithro has quit IRC00:53
*** mithro has joined #tp00:57
mithroJLafont: ping?01:02
*** bostonvaulte1 has left #tp01:04
mithrobah01:05
mithrothat should have been JLP01:05
JLPmithro: pong01:06
mithrostill no email :(01:06
*** brennan_ has quit IRC01:17
*** Appleman1234 has joined #tp01:20
*** mithro has quit IRC01:27
*** mithro has joined #tp01:28
*** mithro has quit IRC01:40
*** mithro has joined #tp01:41
*** brennan_ has joined #tp01:46
JLafontmithro: pong01:46
mithrowas actually after JLP01:47
JLafontahh kk01:48
JLafontnp01:48
*** xdotx has quit IRC01:53
*** brennan__ has joined #tp02:05
*** brennan_ has quit IRC02:06
*** Lukstr has quit IRC02:19
*** JLafont has quit IRC02:40
*** niphree has joined #tp02:46
niphreehello02:47
JLPniphree: morning02:47
Epyon niphree :)02:47
niphreeJLP: morning :]02:48
niphreeEpyon: hi :]02:48
*** brennan__ has quit IRC02:48
EpyonChange of the watch, then. I'm going to sleep ^_^02:48
*** Appleman1234 has quit IRC02:48
niphreeEpyon: sleeping is waste of time02:53
JLPwow, KDE has 103 applications already02:57
llnzJLP: kde had 50(+?) students last year03:00
JLPllnz: if i remember corectly they had around 150 yes03:00
Epyonniphree, aawriight. You convinced me xP03:02
JLP:)03:03
EpyonJLP, how many do we have up to now? :P03:03
JLP2103:03
EpyonThat's still good :D03:03
llnzniphree: hi03:03
JLPEpyon: yeah not bed, and the quality is quite bit better then last year03:04
niphreellnz: hi03:04
mithrohey niphree03:05
niphreehello mithro :]03:06
mithroniphree: how have you been?03:16
niphreemithro: good :)03:23
niphreemithro: I want to finish metaserver - if that's ok03:27
*** Appleman1234 has joined #tp04:01
*** peres has joined #tp04:18
JLPahoy peres and Appleman123404:20
pereshi JLP04:20
* peres live from macdonalds ;)04:20
* JLP can't remember how many years have passed since he has been in mcdonalds04:22
peresJLP: i personally hate it, but it's the closest place to my home that have wireless connection :P04:22
pereshas*04:23
JLPah the wireless, that's a different story then :)04:25
mithroniphree: we are happy to have you work on it04:29
*** Appleman1234 has quit IRC04:35
*** Appleman1234 has joined #tp04:53
*** Appleman1234 has left #tp05:21
*** peres has quit IRC05:26
*** Demitar has quit IRC05:50
* llnz wanders off07:53
llnzlater all07:53
*** llnz has quit IRC07:53
*** tarck4am has joined #tp08:29
JLPahoy tarck4am08:45
tarck4amhi JLP :)08:49
tarck4amI'm just lurking around :P08:49
*** tarck4am is now known as karol08:49
JLPkarol: cool, feel free to ask any question you might have08:50
karolI submited a proposal and wanted to ask if it's ok to use May 26 instead of your sugessted ISO date format -> why? imho it looks better :)08:52
JLPkarol: yeah it's also fine by me08:53
karolJLP: I read some other proposals there were quite long ... could you point me in which direction should I extend my proposal (from the technical point of view)?08:57
JLPkarol: i'd try to put as much effort into schedule and try to make that as detailed as possible and as clear as possible what will be delivered and how to test that it work08:59
*** ryan__ has quit IRC10:59
*** ryan__ has joined #tp11:08
SmokingRopemorning11:14
JLPSmokingRope: ahoy11:19
SmokingRopeJLP: have you read my proposal again? :)11:21
JLPSmokingRope: which one was it again11:21
SmokingRope3D Game Client Using C++ and Ogre11:21
SmokingRopehttp://easlnx01.eas.muohio.edu/~hannasm/TPProposal11:22
tpb<http://ln-s.net/1k+2> (at easlnx01.eas.muohio.edu)11:22
JLPSmokingRope: ah that one, i'll just go somwhere for a few minutes and when i come back i'll read it again just in case11:22
SmokingRopei tried to address the scheme interpreter in this newest revision11:22
SmokingRopei have read somewhere on the site that all of the ship design rules are written in scheme11:23
SmokingRopeand i don't believe there is a working C++ implementation for parsing TPCL yet11:26
mithroSmokingRope: umm, tpserver-cpp has two such examples11:27
mithro(one which binds mzscheme and one which binds guile11:27
SmokingRopemithro: i've only read the text client so far11:27
SmokingRope:(11:28
mithroactually the scheme stuff is optional11:28
mithroas the client can request that the server perform the calculation for it11:28
mithro(it's just more annoying)11:28
SmokingRopewill there be many differences between the server-side scheme binding and the client-side binding?11:29
*** niphree has quit IRC11:32
*** niphree has joined #tp11:34
JLPSmokingRope: looked at the proposal, looks very good to me11:45
SmokingRopeJLP: thanks for looking it over11:46
*** vi1985 has joined #tp11:49
vi1985hello andrei, and everyone. sorry i'm late11:50
andreivi1985, Hello11:50
*** JLafont has joined #tp11:50
*** JLafont_ has joined #tp11:51
vi1985andrei: so are you still planning on having the q&a?11:51
vi1985andrei: or can I go to the lab and do my homework? :)11:51
andreivi1985, Heh, (I think perhaps you're making much more of a deal out of this than I intend to) I just want to ask a few questions11:52
vi1985andrei, ok shoot :)11:52
andreivi1985, Yeah.. I too am doing that. I have to teach a lecture on HMMs and CRFs.. such a pain11:52
vi1985:)11:52
vi1985andrei: just one request (for both of us), no flooding... just question --> answer format11:54
andreivi1985, So.. The dimensionality of the GA algorithm. You said 4D. Can you describe that space?11:55
vi1985andrei, that space would contain an encoded move-by-move sequence of actions for the AI player. I chose 4 dimensions, since it seems to be able to contain everything.11:57
vi1985andrei,11:57
andreivi1985, That's a description of the items in the space; not the actual search space. The important part for GAs doesn't refer to the items, but to the neighbourhoods11:57
andreivi1985, In the sense of. Can you give an example of how a point in this 4D space would look like (and its interpretation in terms of actions) + its neighbourhood function?11:58
andrei(I want to apologize again, I don't want to come off as an ass.. I just know GAs well and have used them quite a bit. I don't mean to offend by these questions)11:59
vi1985andrei: no worries. what do you mean by the neighborhood function then?12:00
andreivi1985, Well what's a GA? It's a way of searching a space of solutions, good old heuristic search (still amuses me that AI was called that at one point). But to describe that space you have to describe both the items in it (which I am still somewhat unclear of) and the neighbours of each item12:01
vi1985andrei, do you mean the other specimen of "hypotheses" in each generation?12:03
vi1985* sorry, meant to say hypotheseis-specimen12:03
andreivi1985, For example in Z^2, the points are ZxZ (tuples of integers) and neighbours are defined as being a neighbour of both of the two numbers (<= 1 to each)12:03
andreivi1985, Hmm.. I'll phrase it differently. Give me a point in your 4D space and tell me what it means (for some simple hypothetical game)12:04
vi1985andrei, a point is an action z, in action class y, on planet or fleet w, in future move x. The whole 4d space is nothing but an encoding of sets of actions for future moves; this is why i call it a Hypothesis. it is not the place where you search for solutions.12:05
andreivi1985, More detailed, an actual point12:06
andrei(I'm actually goinging somewhere with this by the way :)12:06
vi1985andrei, an actual point is a single action, which is a part of some game "hypothesis". i thought it was clear?12:06
andreivi1985, Yes, how do you encode that?12:07
andreivi1985, I mean.. they get to be strings of integers somehow12:07
andreivi1985, Give me an example of one of these strings12:07
andreivi1985, And what it would mean in a hypothetical game12:07
vi1985andrei, yes, unique encoding by integers. that's the fastest and easiest12:07
vi1985andrei, why is it an issue?12:08
andreivi1985, Yes, so.. give me an example of one :)12:08
vi1985andrei, ok, A = <7, 14, 53, 144>  ?? how does that help?12:08
andreivi1985, Because the proposal omits the most important part of a GA. Which is the encoding of points in the space and their neighbourhoods12:08
andreivi1985, So.. how do you interpret those numbers?12:08
andreivi1985,  As in.. 7 is what.. 14 would be what, etc?12:09
andreivi1985, (and no, I'm not doing this to be an ass :) sorry again if it seems that way. This really is the critical thing for a GA and the first thing one must ask)12:10
vi1985andrei, on future move 7, on planet or fleet 14 (i could make the fleets a separate dimension, but then again i decided to put them after the max. planet), in action class 53 (say, colonize. it's not a necessary dimension, but makes it easier for other purposes), the specific action 144, which is "colonize Terra". Was that not clear in my previous wording?12:11
andreivi1985, Ok (it was somewhat clear, but that's my issue) are you familiar with the mutation operators for GAs?12:13
vi1985andrei, didn't I specifically say that several (randomly selected) z-values are going to mutate (I chose a simple way of doing it; instead of cross-breeding alleles, I'll simply do single-organism breeding for the best fitting ones.)12:15
vi1985andrei, what is the issue with that?12:15
vi1985*sorry, not cross-breeding alleles... but crossing alleles between specimen12:16
andreivi1985, crossover and mutation are two different things12:16
vi1985andrei, ok, I inferred incorrectly. Where are you going with the argumen?12:17
*** mithro has quit IRC12:19
vi1985andrei, are you still here?12:19
vi1985andrei, because I have prepared some questions for you. it's only fair, if we're doing this12:20
andreivi1985, Sorry, I'm writing my talk while doing this :)12:20
vi1985mhh12:21
andreivi1985, Yes, feel free to ask anything at any point :) I welcome questions from anyone any time12:21
andreivi1985, One of the very important aspects (and one of the hard bits) of GAs is to have meaningful operations on these strings. The issue I have is that your operators aren't meaningful. For example, a slight mutation in one particular part of that 4D representation has no meaning in terms of the tactics within the game12:21
*** Karol_ has joined #tp12:22
*** Karol_ has left #tp12:22
andreiIf I modify the 3rd integer by 1, that does not represent a slightly different solution. It represents an entirely different one12:22
andreiWhatever fitness function you pick with this representation will not be smooth12:22
andrei(and since GAs are basically following the gradient that's really important)12:23
andreiI just want to make one thing really clear. Don't feel you have to ask me questions because I'm asking them. Ask anything you want any time you want :) I'm glad to answer questions12:24
andrei(so, please, do ask :))12:24
vi1985andrei, first of all, this can be layered further. secondly, choosing which planet to invade with how many troops will only slightly alter the situation when you have tens or hundreds. Then, if the algorithm you're suggesting is only good for narroeing down action-paths for one local maximum, then it's inherently bad for a game, where many different alternatives can be considered.12:25
andreivi1985, I haven't suggested any algorithm :) (nor do I intend to)12:26
andreivi1985, The issue is with actions, not with amounts12:26
andreivi1985, You have an integer representing an action. There are no meaningful operations on this12:27
andreivi1985, Any change will give you a radically different action12:27
andrei(that 53 in your original example)12:27
vi1985andrei, but again, a difference of even 5 actions, "radically different actions", is something I intend to do, not something I intend to avoid, when searching for a solution.12:27
vi1985andrei, and this is why I want 3 types of populations for each seed: a conservative, normal mutation rate, and a risly one for newer solutions12:28
vi1985andrei, does that clarify things for you?12:29
andreivi1985, But the GA requires a somewhat smooth fitness function. You can't write a smooth fitness function for that encoding. Any change in the action axis will give such radically different results (by changing the meaning of the amount axis) that hereustic search won't work12:29
andreivi1985, Let me make it concrete12:30
andreivi1985, That might hepl12:30
andreivi1985, erm, help12:30
vi1985andrei, oh, it's not going to be heuristic search... I'm just going to _dumbly_ look at the outcome of a game simulation, and see how it compares to others.12:31
andreivi1985, GA is hereustic search12:31
andrei A = <7, 14, 53, 144>; so.. <turn, target, action, amount> So let's ignore the turn and the target (and say they're optimal)12:31
vi1985andrei, that may well be, but I'm not going to hard-code them. instead, I'm going to leave it an open question for the evolutionary process to decide.12:32
vi1985andrei, I think I made that clear in the proposal?12:32
andreiNow.. we have <action, amount>; say action 1 is "Build scouts", and action 2 is "Research doomsday weapon"; I think it's far to assume that the amount will refer to number of units, and number of turns for research to be done?12:33
andreivi1985, It's ok, let's try out this example (it will clear up this issue if we follow it through)12:34
vi1985andrei, ... that's not the way to decipher that specific encoding. you cannot mix build scouts, with research doomsday weapon, in the same action-category12:34
andreivi1985, But you said the representation was <some turn, some target, some action, some amount>; <0,0,1,0> and <0,0,2,0> aren't both valid?12:34
vi1985andrei, no. it's <some turn, on some of -your- planets or fleet, on some action>  btw, I can easily have another dimension for "amount", if it's valid for the action.12:36
vi1985sorry12:36
andreivi1985, Oh, I'm sorry; that's just the example you gave above :)12:37
andreivi1985, Where does the 4th dimension come in?12:37
vi1985andrei, my mistake i forgot it now12:37
vi1985one sec12:37
andreivi1985, Sure :)12:37
vi1985<some turn, on some of -your- planets or fleet, some action category, some action>12:38
vi1985better12:38
vi1985andrei, I believe these are already implementation details you're wondering about, not general design?12:38
andreivi1985, Ok, so, what are your action categories?12:38
andreivi1985, They are not unfortunately12:39
andreivi1985, A genetic algorithm searches a space of solutions by looking at the gradient and heading that way (with a bit of slack since they get to mutate). This means that following that curve (the gradient) must be meaningful12:40
andreiWhich is a large part of what makes GA algorithms so hard to get right12:40
vi1985andrei, off the top of the bat i can think of a few: research, attack planet/colonize, attack fleet, etc...12:40
andrei(and why there are many conferences on GA algorithms decades after their introduction)12:40
andreivi1985, Ok, I choose.. research and planet attack12:40
vi1985andrei, did you look at the code sample i posted?12:40
andreivi1985, Code samples? No, I have not12:41
vi1985andrei, it's very basic, but it shows very nicely that the basic idea here _works_12:41
vi1985andrei, check it out, there's a link at the bottom of the wiki ;)12:42
andreivi1985, Ah, ok. That's different, and that works because of what I am saying is missing here :)12:42
andreivi1985, But let's go on12:42
andreivi1985, So.. I chose.. research and planet attack (R, A henceforth)12:43
vi1985andrei, what do you mean?12:43
andreivi1985, <0,0,R,?> <0,0,A,?>12:43
andreivi1985, Can you give me two examples that would fit into the question marks12:43
andreivi1985, Any two?12:43
andreivi1985, It'll be clear in a few moments after this example12:43
vi1985andrei, i was referring to the previous statement.. but nm. Ok, say 1 and 2?12:43
andreivi1985, I mean.. interpretations for them (like.. different types of attack, etc?)12:44
vi1985andrei, btw, these 1 and 2 are only meaningful in the context of the 3-rd dimension... sorry if it wasn't clear12:44
andreivi1985, Yeah, that's the problem :)12:44
andreivi1985, But it's ok. I'll make it clear why12:44
vi1985andrei, go ahead12:44
andreivi1985, What would <0,0,R,1> mean and <0,0,A,1> mean?12:45
andrei(sorry for this. I like the Socratic method and aside from explainig it the gradient way I think this is the simplest)12:45
andreiBy the way, the link to your code example is broken :) (but I'm assuming you're maximizing a polynomial)12:46
vi1985andrei, again, R-1 and A-1 would have _different_ meaning. Actually, R would only have one action: "research next". Let's say that R-1 is research next. A-1, let's say it's "send a fleet to Alpha Centauri.12:46
vi1985andrei, is that better?12:46
andreivi1985, Ok, that's fine. (I understand they have different meaning)12:46
andreivi1985, So.. <0,0,R,1> means "More research" and <0,0,A,1> means "Attack Planet X" but... they're next to eachoter (since I said R was 1 and A was 2)12:47
vi1985andrei, cool. was that the core of the issue? Thanks a lot for pointing it out... I'll include it in the description on the wiki12:47
andreiNo :) that's not the issue, but I'm getting there right now12:48
vi1985andrei, yes, but a Hypothesis is _not_ a search space...12:48
andreivi1985, So.. if you pick a point and look at its neighbours of that point (let's say you have a perfect fitness function)12:48
vi1985andrei, ok, go ahead12:48
vi1985andrei, it would be meaningless, as you said before, but that is not a problem...12:48
andreivi1985, A hypothesis is not a search space, the space of all of them is though12:48
vi1985andrei, ok, yes12:49
andreivi1985, No. That's a huge problem. It means that even if your entire algorithm is perfect. Your fitness function is absolutely perfect. The GA will follow the gradient12:49
andreivi1985, But you're saying the gradient has no meaning12:49
andreivi1985, Let me say that in a simpler way12:50
vi1985andrei, ? I thought you meant its neighbours in the same hypothesis12:50
vi1985let me finish12:50
andreivi1985, Sure, go on :)12:51
vi1985andrei, the fitness function will will be there to evaluate results of simulations, _not_ to compare the nitty-gritty of each hypothesis to its neighbours. This is way simpler, and more elegant. The selection process happens "laisez fair".12:52
andreivi1985, Yes, that's fine12:52
andreivi1985, A simulation is a way of saying "What will happen in the future if I do this"12:52
vi1985andrei, yes12:52
andreivi1985, Let's from now on assume that your fitness function is absolutely perfect12:52
andreivi1985, Doesn't matter how it's implemented12:52
vi1985andrei, by fitness function we both mean evaluating a universe-snapshot?12:53
vi1985andrei, to see how well the player fares in the game?12:53
andreivi1985, I mean. A function that takes a hypothesis and tells me how good it is so that the GA knows how to rank them12:53
vi1985andrei, yes, so we're on the same page12:53
vi1985andrei, as we were 10 minutes ago :)12:54
andreivi1985, Ok. So now.. you have these two points. <R,1> and <A,1> (omitting the leading 0,0)12:54
vi1985ok12:54
andreivi1985, Now.. do you agree that all a GA does is follow a curve (and then on occasion poke outside to try to find better solutions?)12:54
vi1985andrei, that would be an emergent property, yes. And the "3 population" design would constantly make it look for new solutions.12:55
andreivi1985, No.. it's not an emergent property12:55
andreivi1985, A GA is designed to follow the gradient of a curve and look for minima and maxima (and add some randomness to try to escape local ones)12:56
vi1985andrei, all I'm hard-coding is natural selection... the optimization process would, in fact, be an emergent property12:57
andreivi1985, Ok, perhaps this is the issue12:57
vi1985andrei, ok?12:57
vi1985andrei, it seems you're poking around for the issue...12:57
andreivi1985, No.. I know the issue. Your search space is ill defined for GAs because it can't be smooth and your gradient is meaningless in the action dimension12:58
andreivi1985, The fact that a GA uses evolutionary terminology has no bearing upon what it does or what it was designed to do. It is very simply an optimization method for functions.12:58
andreivi1985, It is essentially equivalent to following the curve of a function and sometimes probing distant parts hoping for a better solution12:58
andreivi1985, But in your search space that curve has no meaning12:59
vi1985andrei, no, the gradient _will_ in fact be smooth, since changing an action would only slightly impact the overall result in a big game...12:59
andreivi1985, Because putting research next to attack, even with a perfect fitness function, doesn't mean anything. They're simply two different solutions12:59
vi1985andrei, it is not _meant_ to mean anything... the whole n-D space is just a way to uniquely encode actions.13:00
andreivi1985, Not at all. If I decide to not attack my enemy this turn and instead to research; that has a huge impact on the game13:00
andreivi1985, But that's by far the most important part. The part you're dismissing as just encoding is what entire GA conferences get to talk about :)13:01
andreivi1985, The reason GAs work is because a small change in a solution brings you a little bit closer to a better solution (or worse) and you can be somewhat sure that if that change wasn't good there won't be an incredibly good solution right next to it13:02
vi1985andrei, it will _definitely_ take some effort to make individuals mutate _smoothly_... this is one of the main challenges. The curve will not be as unpredictable as you make it to be, since a single action is not that important.13:03
andreivi1985, Your search space doesn't have this property :) (and if you want, you can prove this to yourself by drawing part of it out)13:03
vi1985andrei, no, a single action will not have that much of an impact on the overall outcome13:03
vi1985andrei, plus, I will have a conservative population, and on top of that, preserve solutions between generations and turns...13:04
andreivi1985, How you write the GA doesn't matter in this case13:04
andreivi1985, Assume it's the perfect GA for this problem13:04
vi1985andrei, ok, assumed.13:05
andreivi1985, The issue is with the topology of the search space. Let me give you another example of the search space. Maybe this will make this clearer: <attack planet, Planet X> (<1,1>) <attack unit, unit Y> (<0,1>) <attack unit, unit Z> (<0,0>)13:06
andreiow.. unit Y can be a horrible unit to attack, so bad indeed that you would be throwing away your unit doing it (or it can be far away13:06
andrei)13:06
andreiBut attacking planet X can be good because you can take it. But unit Z is threatening your capital and unless you kill it you're a goner13:06
andreiHere are 3 points in your search space, they're next to eachoter. The gradient beween them doesn't actually mean anything13:07
andreiFor a GA to work what has to have meaning is the gradient, that's all :)13:07
andreiEverything else is just a series of small details13:07
vi1985andrei, natural selection will take care of that thinking process, since it's not considering it move-by-move, but as a _whole_ sequence.13:08
andreivi1985, But it doesn't matter if you consider it as a whole sequence or move by move. Please stop calling it natural selection, it's a mathematical optimization algorithm (natural selection implies much more than following a curve)13:08
andreivi1985, The issue is that it follows a curve, that's it (mostly) and your curve doesn't encode information13:09
andreivi1985, It's what everyone new to GAs does; you just have to figure out a way to make the changes in your search space meaningful. Not the search space itself13:09
vi1985andrei, your objection is simply void, since you did not get this issue. it is natural selection, since if you look at the design documents, you will see that this is _really_ what I'm hard-coding. Maybe we have a terminology issue?13:10
andreivi1985, You're writing a genetic algorithm to solve something. Natural selection is a biological process through which species evolve in the real world. Genetic Algorithms are a branch of mathematics which optimize functions13:10
andreivi1985, They are entirely separate13:10
andreivi1985, And have very little to do with eachoter13:11
andreivi1985, (aside from the horrible choice in terminology which leads to confusion)13:12
vi1985andrei, natural selection is a process by which fitter individuals survive to reproduce. Perhaps it would be better to simply call it an "evolutionary algorithm" instead; I wasn't aware I was abusing notation.13:13
vi1985*terminology, not notation13:13
andreivi1985, It is not an abuse of terminology. Natural section cannot be programmed. It is a biological term for a process which happens in the real world. It has nothing in the least to do with computers13:14
andreivi1985, Evolutionary algorithms are a discipline of mathematics (and CS) and Genetic Algorithms (which is what you propose) are part of that13:14
vi1985andrei, what about various artificial-life simulators?13:14
vi1985andrei, this is -exactly- what happens there... as far as I know, they are within the set of GA solutions.13:15
andreivi1985, Those are different, they do no rely on evolutionary algorithms (nor do they encode information the same way generally). They rely on the theory that was originally developed for cellular automata13:15
vi1985andrei, not always; most of the better ones today aren't cellular anymore, and in fact they have hard-coded neural-net intelligence evolution, and phenotype evolution...13:16
andreivi1985, That is entirely different from this. Genetic Algorithms do one specific thing. That's all. They follow that curve and sometimes poke around13:17
vi1985andrei, natural selection can be easily coded... it's just an issue of how good it is, relative to the task13:17
andreivi1985, Natural selection is not an algorithm. It is a set of biological principles13:18
andreivi1985, It has nothing at all to do with CS. Only mathematical optimization matters in algorithms13:19
vi1985andrei, if you're right, it was a misnomer to call my design that. For all that I've read, I wasn't aware that this is the case. But even in that case, it does nothing to detract from the merits of the solution.13:19
andreivi1985, You're implementing a genetic algorithm to solve a particular problem13:19
andreivi1985, No?13:19
vi1985andrei, natural selection is an algorithm. Biologists are still trying to figure out the fitness function, and the genotype. Now, for all the research that I've done, I became more and more convinced that what I'm doing is in fact GA. I do not see enough evidence at the moment to rename it, but thanks to our discussion i'll dig around again, to see if there are any inconsistencies.13:22
andreivi1985, Wait13:22
vi1985andrei, your criticism, as far as i can see, was trying to map my design to some preconcieved notion youhave13:22
andreivi1985, The algorithm you describe. Whatever you call it, is a genetic algorithm13:22
vi1985andrei, i'd like to think so. do you?13:23
andreivi1985, It's not a matter of thinking it :) That's what it is if you read through it. Now.. GAs have a large body of mathematics and a few decades of work (I think about 3 at this point) on them13:23
andreivi1985, And that tells us that GAs are essentially gradient-descent algorithms, which means what matters is the gradient of the search space. I'm not sure how I can say it :) Your algorithm is of a class that's been explored and doesn't fit well with your search space13:24
andreivi1985, I can show you appropriate books, or literature that can explain this more formally. (or I can do so, the math is easy)13:25
andreivi1985, Or point you to a GA tutorial that's written by a real researcher in the field explaining this. (Koza if I remember right has some, well anyway, his book was good)13:26
vi1985andrei, I haven't studied it formally, if that's what you mean to highlight. In fact, it's written plainly in my proposal. Your criticism, again, refers to your failure to map my design with a template.13:26
andreivi1985, No.. not at all13:27
vi1985andrei, what gives me the confidence in the design, is the code that I have already written, which successfully simulates natural selection, and which works, and works great.13:27
andreivi1985, My issue is that your algorithm happens to be part of a widely studied mathematical field and that 30 years of work on that have shown that it's a gradient descent algorithm and that the curve matters. Not the space.13:27
andreivi1985, If you're referring to your example code. (which I'll assume was maximizing a polynomial) of course it works :) The gradient has meaning there13:28
vi1985andrei, the curve definitely matters. I have said it to you many times here; you claim that the curve will not be smooth, and I claim it will be smooth enough.13:29
vi1985andrei, the code example isn't about maximizing a polynomial. download it and see for yourself13:29
vi1985andrei, if we agree to disagree, i have a few questions for you13:30
andreivi1985, But.. there's no smooth enough. It's the fact that the gradient must have meaning. It must be meaningful to want to head in a certain direction13:30
andreivi1985, There's a clear mathematical definition for this13:30
andreivi1985, And your search space doesn't satisfy it :)13:30
andreivi1985, I'm not saying it can't work. I'm saying that you should look more closely at all the incredible amount of research that has been done on GAs because I have (I've read quite a few books and papers and plan to use some in my PhD research) and you're missing a key component13:31
andreivi1985, (well, it can wor if you change your search space)13:31
vi1985andrei, if it's reasonable to approximate it with a smooth graph, then it's smooth enough, and has meaning. My working hypothesis is that from the hundreds of actions taken each turn, changing 4 or 5 of them will not radically change the situation.13:32
andreivi1985, It is not smoothness that matters. It is that the change must mean something13:32
andreivi1985, Smoothness is a result of that change being meaningful13:33
andreivi1985, Not the other way around13:33
vi1985andrei, in natural selection it's exactly the other way around. And it has been successfully implemented in artificial-life simulation.13:34
andreivi1985, This is not natural selection. It is a mathematical optimization technique13:34
andreivi1985, That happens to have chosen the same terminology13:34
vi1985andrei, perhaps, again, it's a misnomer on my side, and I can call it something else to avoid possible confusion.13:35
andreivi1985, But it doesn't matter what you call it. That's the algorithm you're actually implementing :)13:36
andreivi1985, But it's ok. I can send you something to read on the subject that will hopefully shed some light on optimization and this class of algorithm. If you want13:36
andreivi1985, Sorry. We've been doing this for 1.5 hours and.. I have work to do :) Did you want to ask me any questions?13:36
vi1985andrei, all I'm hard-coding is a natural selection process. If it does not meet the rigorous definition for GA, then it's not consequential at all if I rename it.13:37
andreiAlso, again. Feel free to ask anything, at any time at all :)13:37
vi1985andrei, do you have 15 mins?13:37
andreivi1985, You are not implementing natural section (if you visit a biology professor and tell them that they might scream at you). It meets the exact definition of a GA. And that just happens to tell us that the search space you are describing doesn't work :)13:38
andreivi1985, Sure :)13:38
vi1985andrei, I am implementing natural selection, as do other other algorithms (e.g. artificial life simulators)... but we agree to disagree.13:38
vi1985ok13:39
vi1985andrei, so you seem to want to use alpha-beta pruning for your game-trees?13:39
andreivi1985, Only in the endgame when there are only 3-4 moves left13:39
andreivi1985, And the game is trivial13:40
vi1985andrei, and there is only one opponent left?13:40
vi1985andrei, what if the endgame is with 3 opponents?13:40
andreivi1985, (so I only have say 1 planet, 1 ship and can only see a few enemy ships around)13:40
vi1985andrei, so how do you define "endgame"?13:41
andreivi1985, Well, if it's trivial (so if the tree has some low branching factor) it'll kick in alpha-beta pruning, otherwise it won't13:41
andreivi1985, Say.. the game tree has a branching factor of 1013:41
andreivi1985, (which I randomly chose right now :P)13:41
andreivi1985, You can set whatever value you want, it depends on how good you want the AI to be in holding out right at the bitter end and how long you want that to take.13:42
vi1985andrei, ok, but will your AI "kick it in" even if there are multiple opponents left?13:42
andreivi1985, If the game tree has a branching factor less than a particular set value13:42
andreivi1985, Which means, if there are fewer than that many different things to look at in the game tree at every step13:43
andreivi1985, It's a much more robust definition than looking at how many oponents there are13:43
vi1985andrei, ok. first of all, how does your alpha-beta decision tree going to account for multiple opponents, when, by def, it is built for a game of 2?13:43
andreivi1985, And the number of opponents doesn't say anything. One opponent can have 20 ships right next to me which can make the game tree way more complicated13:43
andreivi1985, It's the endgame. If this ever comes around the AI is already dead. It'll just break down everything into itself and opponents13:45
andreivi1985, Ignoring allies (doesn't matter, it's dead in a few moves because it has so few units)13:45
vi1985andrei, so you want to go by the Paranoid Hypothesis?13:45
*** jphr has joined #tp13:46
vi1985andrei, since this is, in 99% of the cases, the wrong heuristic to use13:46
vi1985andrei, as it is highly unlikely all players have ganged up on you.13:47
andreivi1985, Paranoid hypothesis? You mean. Will it assume while doing alpha-beta pruning that things can go as wrong as possible?13:47
andreivi1985, By the way, paranoid hypothesis is not a term used in the literature13:47
vi1985andrei, it will cluster all other opponents as one, and treat them as if they're ganging up on you.13:48
vi1985this is the only way to keep it true to def of alpha-beta13:48
andreivi1985, Sure. It doesn't matter. The only time this applies is as at the end as the AI is only 2-3-4 moves away from death13:48
andreivi1985, And it has almost no units, and almost no planets left13:49
andreivi1985, (even having 2 units means your branching factor goes to  18+ it's irrelevant)13:49
vi1985andrei, so you're dooming your AI from the start? Suppose your other strategy is successful at bringing it close to endgame, and then it switches to a completely misguided heuristic, and loses? or reverts back to a worse situation, at which point the other module takes over?13:50
andreivi1985, The start? This only applies when the AI has under 1 unit (and can't build anything) or no units and can maybe build something.13:51
andreivi1985, It no longer matters at that point13:51
andreivi1985, Anyway, the alpha-beta pruning was put in there to give me a way to test out the algorithms in a controlled way (not in a real AI). And it just seems reasonable to use it right at the end.13:52
andreivi1985, The rationale is when you have no units, being paranoid is the only way to surive (since anyone that notices you will kill you off) (or when you have just 1 and don't have the ability to build more)13:53
vi1985andrei, this seems like redundant overhead; sure, if you want to use it as scaffolding in your design process, then it can do ok for testing that everything else works. But why even bother throwing it in, if you supposedly have a superior algorithm?13:54
vi1985andrei, I remember at one point on IRC you said it will be more central than you're claiming now. have you made considerable changes to design?13:55
andreivi1985, The alpha beta pruning? I don't think I said that. No the design has been the same since the start :}13:55
andreivi1985, You're welcome to see my git logs :) (I keep all my work in a large git repository)13:56
vi1985andrei, you actually said something to that effect, but I won't insist. Besides, it's very hard to tell from your design document just how central you want it to be.13:57
vi1985andrei, I also had a question about your strategy modules13:57
vi1985andrei, you mention an ability of one module to generically call another. So, if your highest strategy module decides on a course of action, just how will it know the actual cost involved, if there is only top-down communication?13:59
vi1985let me elaborate13:59
andreivi1985, If you claim I said it. I'd like to see the quote :) (saying you won't insit is the equivalent of saying that I'm dishonest and that you're right but it's not worth pursuing). The only time I mentioned alpha beta pruning I said that I want to use that and an inference engine and explore the tree using the strategy modules.14:00
andreivi1985, But go on :)14:00
vi1985andrei, no, it's not. it's equivalent to saying that I remember something to that effect, but I won't hold you responsible, since I may have not understood it correctly.14:01
andreivi1985, Okies :) go on (I looked at the logs, I only mentioned alpha beta pruning once on IRC aside from today and that was in the context above)14:02
vi1985andrei, for example the highest-order strategy module decides to take over sector A and not B. It sends orders to modules underneath, to get resources, to prepare starfleets etc. But until this process trickles down to the basic actions, it may well be much costlier than taking over sector B. The point is that your top-level modules will have no way of knowing the cost.14:03
andreivi1985, Sorry, I just don't like it when people say I've made a mistake and don't explain what it is. It's a pet peeve :P But one that works really well with math and CS (don't mean to imply anything about you)14:03
vi1985andrei, and if you plan on bottom-up inter-module communication, how are you going to prevent it from being brute-force?14:04
vi1985andrei, I was just explaining my rationalle for the emphasis on alpha-beta trees in your design14:05
vi1985andrei, because it seems to me, that this can lead to sluggish and bizzare behaviour.14:05
andreivi1985, In your example that module wouldn't pick A or B. It would propose either A or B and other modules are applied to (plan A) and (plan B) until one of them seems better (or it stops looking ahead).14:06
vi1985andrei, ok, that's one way of finding out the cost. But how are you planning on not making it brute force search through an unknown tree-depth?14:07
vi1985andrei, because if you're going to decide at bottom level, a) it may take a while when you reach it, and b) you have no idea if and when you reach it at all14:08
andreivi1985, Buttom level? There's no bottom level. The seach stops through cutoffs14:08
* danderson glazes over at the uber intricate debate on AI techniques14:09
vi1985as far as I understand, each module is responsible for controlling other modules in a heirarchical manner, until that heirarchy stops at the action-level14:09
andreivi1985, The same way that chess engines work, for example. They have cutoffs. Hereustics for weighing parts of the tree that are so far unexpanded. And they focus on what is theresting.14:09
vi1985andrei, right, but you will never know the actual cost until you have analyzed the leaves of that tree, since unlike chess, your tree is a heirarchy of orders.14:11
andreivi1985, Not at all. Modules are not a hierarchy. They're actually a graph. There's a tree that gets expanded as modules wish it to be. A few modules are deemed the initial ones and they first expand which part of the tree is interesting.14:11
andreivi1985, Hmm.. modules are just hereustics which get to expand the tree, and when they get to a bit of the tree they're not designed to handle they know how to give it next to14:12
andreivi1985, It's just like chess :) (except their weight functions get to be a bit more global, and a lot more complex)14:12
Ohmthere are some pretty damn good dissertations on this kind of stuff14:12
Ohmsearch around for othello AI:s, their structure and logic is really important, while not being especially bogged down by rules. Make good case study14:13
andreivi1985, An 'order' is just a module deciding that a path through the game tree is interesting and expanding it :)14:13
OhmBut it sounds like you know all about it14:13
andreiOhm, I haven't looked at othello AIs (mostly because I don't know how to play). I have read some papers on chess/go/poker AIs14:13
OhmHmm, I believe developing an AI for go and for othello would be extremely similar14:14
andreiOhm, Really? It's that much of a pain? ouch..14:14
Ohmjust remove the fact that stones flip other stoned and expand the board a lot, and you've made othello into go14:14
Ohmoh, and change score count rules a bit - area instead of stones14:14
andreiOhm, Funky. Maybe I'll learn how to play it then. I have difficulty finding go players here14:15
Ohmothello is a very simple game, and takes VERY little effort to learn14:15
Ohmlike 5 minutes tops14:15
andreiOhm, Awesome. Thanks. I'll take a look later tonight :)14:15
vi1985andrei, frankly, I'm not sure how to understand this graph --> tree mapping. At base, you will need a tree to look at your options. So perhaps you're saying the modules will create branches of this tree, and hand it over to other modules if they're stuck. but you will again never know the cost until you reach the leaves.14:15
andreivi1985, Is there anything else you wanted to ask? :)14:15
Ohmyou usually have a small amount of possible moves, so it's good on memory aswell :)14:15
andreiOh there we go14:16
vi1985brb14:16
andreivi1985, There exists a game tree. And I can apply a predicate 'expand-action' on any node. Given some action, I can expand that node to see it's consequences. I choose a few modules and say "I want an answer so what to do next". Those modules each find a different branch of the tree interesting, they go and expand it. Then they rearch a point where their hereustics tell them to expand with a different module, and they do that. The tree is someth14:18
andreiing that gets used by them, there's no mapping between modules and tree.14:18
andreiOhm, Do they have the whole stone handicap thing, like in go?14:18
andreivi1985, That's why I say the central part of the algorithm isn't the tree. It's the modules. It's just like you would wirte a normal AI. But instead of hardcoding the rules, they're encoded in the game tree.14:20
andreivi1985, (I shoud clarify, I say that in the proposal)14:21
Ohmandrei: no, no handicaps in othello14:21
OhmI think14:21
Ohmand if there are, they consist of points, not stones14:21
andreiOhm, That's my favourite thing about go. Two people can always play regardless of their relative skill levels.14:22
vi1985sorry, phone call14:23
Ohmandrei: sorry, gotta go14:23
Ohmbut yeah, I also played go a bit14:23
Ohmfew years ago14:23
Ohmanyways bbl14:23
andreiOhm, See ya :)14:24
vi1985andrei, right, but still at base you're exploring the game-tree using those modules. How can any module decide on an "interesting" direction, if it has no way of actually knowing the cost of those actions? And if, as I said earlier, a module decides to explore A and not B, it will have no idea it will be cheaper than the alternative, unless it actually traverses to the leaves (with help of other modules, perhaps) and f14:26
andreivi1985, It does have cost functions. I never said modules were devoid of any logic. Just that the rules are abstracted out14:27
*** brennan_ has joined #tp14:28
vi1985andrei, but this is the whole point of having such a generic design. If you're going to give them heuristics to go by, why not discard it altogether, and simply have heuristical micromanagement? is this what your design boils down to?14:28
andreivi1985, Traversing to the leaves of a game tree means exploring a path until the end of a game14:28
andreivi1985, That is actually an infinite path in #tp14:28
vi1985andrei, no, that is _definitely_ not what I mean. I mean peeling layers of abstraction, until you reach specific actions,14:29
*** ryan__ has quit IRC14:29
vi1985.14:29
andreivi1985, There are no levels of abstraction. The game tree is composed of actions14:29
vi1985andrei, didn't you say that these modules will be sets of abstracted rules? and that a module can call other modules if necessary? in that case, you get a tree (not necessarily a roted tree), with decreasing levels of abstraction as you go down. A module can have, e.g., 5 leaves, and 6 sub-trees, where it calls other modules.14:32
andreivi1985, You are implying this system can play a game without any background knowledge. Not at all, that would be a very odd claim for anyone to make. There are strategies that modules can use to explore. That's why they're called strategy modules. So a module can see if it is beneficial to do research. This means expanding a node in the tree that chooses to do research and then applying other modules to see if it's viable. The same way any AI wo14:32
andreirks but without hardcoding the rules.14:32
vi1985rks?14:32
andreiworks. IRC cut it off :)14:32
vi1985censurship :)14:33
andreivi1985, (well, I don't know if it's my client or freenode)14:33
andreivi1985, Heh. Yeah. Abstracted rules, like "It's a bad idea to do research when an enemy can reach a weak planet in a few turns, build instead" Those don't imply much about the game rules14:33
vi1985andrei, ok. Sure, but why make modules whithout hardcoding heuristics into them?14:33
andreivi1985, I never said I won't have weight functions to decide what's likely to happen. Just that they'll be general and rule-independent, the rule-dependent bits are done by the game tree.14:34
andreivi1985, Note that the examples I gave actually had logic in them. One of them decided that a unit had to be moved and called more primitive ones to find paths.14:36
andreivi1985, All I want to do is eliminate the need to hardcode ideas like terrain features in the AI. I don't want to remove the general reasoning about terrain features. That would be quite the claim :)14:37
vi1985andrei, it doesn't matter where your weight assesment functions come from (can be from library, hard coded, etc. It's really not the point). The point is, that you will never know the real weight of an inner-node in that abstractio-layer tree, until you actually see what is the sum-weight of all the individual actions such a route would entail. Thus, you cannot simply weigh it until you really know the sum-weight of th14:37
vi1985andrei, right, and all I'm saying is that you never know the "cost" of a particular strategy, until you have the sum-weight of the leaves trickle up to the node.14:39
vi1985andrei, which is brute search.14:39
andreivi1985, No AI does; all AIs use hereustics to decide when to stop14:39
andreivi1985, Including humans :)14:39
andreivi1985, All I want to do is implement the same rules normal AIs do, but in terms of a game tree to represent possible moves, instead of hardcoding the possible moves in the AI. Your criticism applies to every AI (aside from brute force)14:40
vi1985andrei, but such a generic approach, that says "i'll let a module decide if it wants to use other modules", has the danger of going very deep, and very wide into the abstraction-tree14:40
vi1985andrei, no, it applies to the approach that lets a module choose another modules to take some of the action. this is a very slippery slope to brute force!14:41
vi1985andrei, your design would work perfectly for designing a cooperative AI (as you have mentioned!), but when there is no "smart overseer", it can result in terrible recursion...14:42
*** Epyon has quit IRC14:43
vi1985andrei, and you cannot hard code a smart overseer, and still maintain the generality of your design (which, i assume, it's main feature)14:43
andreivi1985, You're assuming that it would expand nodes randomly. It would not. The same way you decide that to move a unit you only care about A* + the weight being how likely you'll get attacked and then have a likelyhood function, that's the same thing you write here14:44
andreivi1985, Not every module must be entirely ruleset independent. You want as much to be reusable (like A*) as possible, but some things must have game knowledge14:44
andreivi1985, If the AI was ruleset independent entirely that would assume it can find new strategies on its own14:45
andreivi1985, So if you change rules, yeah. You might have to swap some modules in or some modules out, but on the whole most of them can stay because the actual strategies are mostly the same. (and the really bad ones won't be explored. Like if your ruleset doesn't support research, well.. having the research modules in makes no real different aside from a bit of wasted CPU time)14:46
vi1985andrei, i stated before that i understand that your strategy modules would randomly assign tasks to other modules. And i understand that the weight functions can come from external libraries (as well as some other restrictions). This is not what i'm saying. what i'm saying, is that in order for a strategy to even begin to guess its weight, it needs to know when the recursion would terminate; that is, when no more modul14:48
andreivi1985, No. They would not randomly assign tasks, not at all.14:48
vi1985andrei, this is not what i'm saying :)14:49
andreivi1985, You said "i stated before that i understand that your strategy modules would randomly assign tasks to other modules."14:49
vi1985andrei, sorry, typo14:49
vi1985disregard it :)14:49
andreivi1985, There's no need for it to go all the way down. It can guess (as does every AI, implicitly or not). For example, if you were to write a normal AI for a game, you would implicitly make some guesses. In freeciv for example certain types of threats are not considered when deciding what a safe path for a unit is.14:51
vi1985andrei, so basically, as i see it, your design allows for any module to call any other module for assistance. that means, that "inner nodes" will could not even _begin_ to estimate how many modules will be called, and _which_ modules will those be, and decide on an appropriate heuristic.14:52
vi1985andrei, in the case where you can guess which mudules will be called for which actions, you will need gigabytes of logs to come up with appropriate heuristics...14:53
andreivi1985, Call implies a model of execution where modules pass the current instruction pointer over from one to another14:54
*** brennan_ has quit IRC14:55
andreivi1985, The modules are arranged in a graph. The AI system explores parts of the graph as needed (the graph is represents data dependencies between modules). I see no reason to let modules look at neighbours in that graph. A module can expand part of the tree if it wants. There's no reason why a module shouldn't just guess at a weight of a particular node.14:55
andreierm remove that is :p14:56
andreivi1985, Of the reasons why I put them in a graph is so that you can keep track of depth14:56
andreivi1985, And one of the reasons I said modules use the game tree and that's just one tool not the central focus is because of estimates14:57
vi1985andrei, but those instructions must be abstract, since in the opposite case the module could execute them itself. That brings me back to the previous point that either a) a "smart overseer" is needed, or else b) you need to document which heuristic can be applied to which combination of action-heirarchies. In the opposite case, you have no way of even guessing the actual actions it will require to perform something14:58
vi1985andrei: the graph size will then be O(n^(n-1)) at worst case, to accommodate for the combinations.14:59
andreivi1985, No, not at all. Every module can be as abstract or not as the writer wants.15:00
vi1985andrei, right, i said before that it can make several concrete actions, and pass some abstract instructions to others.15:00
andreivi1985, Not at all, the graph will be mostly linear. You're assuming again that modules can call any other modules. Ok. Maybe this will help explain. You write a function that moves a unit for a game. You don't put into that function information about research or the economy. You just want to move a unit.15:01
vi1985andrei, i'm not even looking at worst-case scenario... what i'm saying that in principle it will be very hard for modules to evaluate their actions, not knowing how many other modules it will need to call, and which ones will there be.15:02
andreivi1985, If you can write an AI for a game like civilization and have it evaluate how good an action is, you can do the same here15:02
andreivi1985, And write is as rule independent as you want :)15:03
andreivi1985, erm it as rule independent15:04
vi1985andrei, but there it uses simple heuristic micromanagement, which is not what you're proposing. You're claiming that modules can delegate abstract orders to other modules. The criticism is that in a complex scenario (say, "take over sector C"), most inner modules will have no knowledge of the actual course of action, since they will be higher up in abstraction, and will thusly have no knowledge which modules the next o15:08
Ohmthe less rule dependent the better, as long as it gets finished, right?15:08
andreiOhm, Yeah :) exactly. And because it's in a graph you can do neat things like decide in a ruleset unit moving shouldn't take into account some factor really easily.15:10
vi1985Ohm, working designs are top priority as far as I know. Rule independence, or the infrastructure for it, is nice but secondary.15:10
vi1985andrei, well, we agree to disagree then. let's keep it at that :)15:11
vi1985i need to get to the lab and do some work before wall climbing this evening...15:11
Ohmk15:11
* vi1985 does not like dueling.15:12
andreivi1985, Heh, very well15:12
Ohmi am healing up a muscle inflammation15:12
andreiOhm, Ouch, I hate those15:12
OhmReally want to get back to doing real exercise15:12
vi1985andrei, take care. No hard feelings on this side of the channel :)15:12
Ohmright now there's very little I can do for heavy cardio, unless I spend the huge amount of money to get to a ymca15:13
andreivi1985, Heh, no problemo :) I'll gladly answer more some other day15:13
*** vi1985 has quit IRC15:13
Ohmand according to the medical person I spoke to, I shouldn't be lifting weights either15:13
andreiOhm, Ouch, which muscle?15:13
Ohmconnection between torso and right leg15:16
Ohminner15:16
OhmI don't remember it's name15:16
andreiAh, yeah.. those aren't any fun.15:20
*** Iwanowitch has joined #tp15:21
Ohmvery depressing15:27
SmokingRopeis that something they can treat?15:30
SmokingRopei've bruised my ribs a couple times and they just told me go easy on them for a couple weeks15:31
Ohmanti-inflammatory pills and rest is pretty much it15:38
*** andrei has quit IRC15:53
*** vi1985 has joined #tp16:27
vi1985JLP, i really didn't enjoy the last IRC session... I don't think this kind of thing should be encouraged.16:37
JLPvi1985: well we don't encourage it but discussions like this are nothing unusual in open source16:38
IwanowitchWait, what happened? If I can know?16:39
vi1985JLP, i understand the value of it from your perspective.16:39
JLPvi1985: so as long as it is only talk about the technology and civilized as this one was there should ne nothing wrong16:39
JLPvi1985: but it is always up to each person if he wants to take part ot not16:40
vi1985Iwanowitch, basically someone challenged me to a "poke holes in each other's design" session.16:40
IwanowitchJLP: I updated my proposal on the wiki, btw16:40
JLPIwanowitch: will take a look a bit later16:42
vi1985JLP, i didn't have a problem with his questions... I just thought that the design proposal should speak for itself.16:42
Iwanowitchvi1985: Ah, I see. Well, yeah, it depends on how and why it is done, I suppose... Constructive criticism is good, nasty things should be avoided, makes sense.16:42
IwanowitchJLP: thanks16:42
vi1985ok, I just didn't know it was common practice in open source :)16:43
JLPvi1985: it's one of the experiences of open source development that programs like gsoc try to show new people16:48
JLPvi1985: i know it does feel kinda hard and unpleasant at first, but you get used to it16:48
JLPvi1985: i know when i experienced my first session like this with KDE developers16:49
*** zzorn has joined #tp16:50
vi1985JLP, i understand... especially when there is competition over a position. Me and my friends do it all the time, but of course it would be different here.16:52
Iwanowitchvi1985: Wait until you see code reviews from complete strangers... That feels weird.16:53
IwanowitchI'm not sure TP has much of a code review policy though?16:54
vi1985Iwanowitch, :) I see you've been around the block?16:54
IwanowitchI did soc last year, been meddling a bit since then.16:54
IwanowitchI don't call myself experienced though.16:55
karolvi1985: imho I was one of the most civilised discussion i witnessed16:55
karolsry, it was ...16:56
vi1985karol,  lol :) i'll take your word for it ;)16:56
*** DTRemenak has quit IRC16:56
JLPIwanowitch: yeah we don't have a policy, but there should definitely be more code and perr review16:57
JLPfor the better of the code itself :)16:57
vi1985i guess here you also need to put your mouth where your money is! ;)16:57
IwanowitchJLP: Of course, you need people and time for it.16:57
karolalthough, from mathematics' point of view I have to agree with andrei :-/16:57
vi1985karol, what exactly are you referring to?16:58
karolcategory modeling is kind of dificult16:58
karolvi1985: the GA16:58
* JLP can't agree with anyone since he know nothing about AI, but it was interesting reading it16:59
*** jphr has quit IRC17:00
vi1985karol, I don't think that andrei got to the bottom of it. He was interested to see how an algorithm can compare b/w individual elements of "hypotheses", whereas it is not what it does at all.17:00
IwanowitchMmm. With proper representation, genetic algorithms can be very powerful for most things. But it's often hard to find a good representation and associated mutation/recombination operators.17:01
vi1985Yes, that17:01
vi1985*sorry, will continue17:01
*** zzorn is now known as zzorn_food17:02
* JLP would just take in both AI applications if there were enough slots for GSoC and then we could see the battle of both AIs :)17:03
vi1985Yes, that is true. The thing is that representation and decoding is actually not a big issue here; what I was surprised he didn't mention, is the difficulty in simulating it. (which will be the hardest part in coding it)17:04
karolvi1985: I am looking on it as mathematical model, I learned that categories are better expressed by another structure or better there should be used data compresion algorithms or you should look for corelation ... -> it's rather from my experience with logistic regression and neural networks ...17:04
JLPdamn i can already see it will be even harder this year to choose only a few applications because of limited number of slots17:04
karolJLP: bye bye persistnece module :(((17:05
vi1985karol, it all really depends, as Iwanowitch said, on the selection mechanism for passing to the next generation. He (andrei) maintained that I need to compare segments of "hypotheses", while I don't think it's necessary at all. But I don't want to discuss it for fear of putting words in his mouth!17:06
JLPkarol: i wouldn't say that just yet, the server could use that a lot, since it is still not very stable17:06
* karol should'n typewhile eating17:07
* vi1985 should start concentrating on getting his AVL trees balanced... or else he will get a C17:08
karolJLP: btw. when does llnz usually arrive here? :)17:09
JLPkarol: uf hard to say, its 10:13 at his place now17:09
karolJLP: pm?17:10
JLPkarol: am17:11
karolhmm, 12hours before me :)17:12
JLPkarol: yeah same here, he's in new zeland, mithro has 7:47 on his clock and is somwhere in australia17:13
karolJLP: you're from slovenia, am I right?17:14
* danderson wonders how many clocks are needed to keep track of all TP developers :)17:14
JLPi guess they should be here any hour now, if there's no sun outside or if they don't sleep too long :)17:14
JLPkarol: yup, slovenia17:14
JLPdanderson: one is enough, you just need to know all the offsets :)17:16
dandersonheh17:16
karolJLP: never heard your language, but I would love to compare it to slovak, my mothertongue17:16
karollive of course :)17:17
JLPkarol: hehe, well it looks like if we talk to czechs we understand each uther almost fine, if it is any similar17:17
JLPkarol: sometimes it is easier then using english :)17:18
karolJLP: czech is quite similar (I study in Prague), polish is a bit harder to understand (my home is near the border)17:19
karolJLP: it would be quite a talk to summon people from 4 countries :)17:20
JLPkarol: yeah, it should be fun17:20
JLPkarol: i only know that while otrok in our language means baby/kid it means slave in czech17:21
JLPkarol: so it is quite funny to see cars on the road that say slave on board :)17:21
karolkarol: in slovak too, our two languages are very similar17:22
karollol17:22
karolI write to myself ;)17:22
* JLP is to tired, goes sleeping17:51
JLPgood night all17:51
vi1985take care17:51
IwanowitchNight.17:51
karolgood night all18:06
*** karol has quit IRC18:07
*** andrei has joined #tp18:51
*** vi1985 has quit IRC18:52
*** greywhind has joined #tp19:00
*** zzorn_food is now known as zzorn19:18
*** greywhind_ has joined #tp19:20
*** greywhind has quit IRC19:20
*** ezod has quit IRC19:56
*** xdotx has joined #tp20:01
*** bddebian has joined #tp20:24
bddebianHowdy20:26
*** zzorn is now known as zzorn_zzz20:46
SmokingRopehi bddebian20:47
bddebianHello SmokingRope20:52
*** Iwanowitch has quit IRC21:17
*** greywhind has joined #tp21:20
*** greywhind_ has quit IRC21:20
*** Lukstr has joined #tp21:22
*** mithro has joined #tp21:48
mithromorning people22:06
mithroxdotx: ping?22:06
* xdotx hides22:06
xdotxmithro: hate to say it but i don't think i'm going to be able to do it today.22:07
mithro:(22:07
mithroany particular reason?22:07
xdotxwas up late last night working on my graphics project and i'm still not done - it's due tomorrow22:08
xdotxalso i'm pretty sure we won't get very far testing because of some known issues with the new object view system22:08
xdotxi think llnz knows more about what's probably going wrong than me. i've got some clues, but i've concluded it's a non-trivial solution.. for me at least22:09
xdotxi'm thinking it's got something to do with object views not being created for objects you don't own, and thus those objects appear not to be updated22:10
xdotxer, to clarify it's with how RFTS is using the object view system, not the system itself22:10
xdotxi think that one bug is producing a number of different strange problems though22:12
SmokingRopeis there a known issue with colonization?22:16
SmokingRopeit seems like every time i try to colonize the server crashes22:16
xdotxx.x22:16
SmokingRopei've reproduced a number of times now22:16
SmokingRopes/d a/d it a/22:17
mithroSmokingRope: under RFTS?22:17
mithroor Minisec?22:17
SmokingRopeon demo1 and linz's server most recently22:17
SmokingRopeearlier today22:17
mithrohrm... you'll have to poke llnz22:17
SmokingRopellnz: ping?22:17
mithrohe isn't on at the moment :)22:18
SmokingRopelol22:18
SmokingRopei guess it's not a server crash, but the client locks up on the 'downloading orders' step of a universe update22:25
mithroSmokingRope: oh22:39
mithroare you running from git?22:39
*** Lukstr has quit IRC22:42
*** Lukstr has joined #tp22:48
SmokingRopemithro: nope, just the source forge version22:52
mithroSmokingRope: ahh, known problem22:53
mithroI'm pondering releasing a 0.3.1.122:53
mithrowhich fixes that bug22:53
SmokingRopei got it all working from git, the installer is nice though23:39
*** DTRemenak has joined #tp23:45
*** llnz has joined #tp23:46

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!