*** tpb has joined #tp | 00:00 | |
*** ChanServ sets mode: +o tpb | 00:00 | |
*** Appleman1234 has joined #tp | 00:23 | |
*** mithro has quit IRC | 00:29 | |
*** mithro has joined #tp | 00:34 | |
*** Appleman1234 has quit IRC | 00:45 | |
*** bddebian has quit IRC | 00:50 | |
*** mithro has quit IRC | 00:53 | |
*** mithro has joined #tp | 00:57 | |
mithro | JLafont: ping? | 01:02 |
*** bostonvaulte1 has left #tp | 01:04 | |
mithro | bah | 01:05 |
mithro | that should have been JLP | 01:05 |
JLP | mithro: pong | 01:06 |
mithro | still no email :( | 01:06 |
*** brennan_ has quit IRC | 01:17 | |
*** Appleman1234 has joined #tp | 01:20 | |
*** mithro has quit IRC | 01:27 | |
*** mithro has joined #tp | 01:28 | |
*** mithro has quit IRC | 01:40 | |
*** mithro has joined #tp | 01:41 | |
*** brennan_ has joined #tp | 01:46 | |
JLafont | mithro: pong | 01:46 |
mithro | was actually after JLP | 01:47 |
JLafont | ahh kk | 01:48 |
JLafont | np | 01:48 |
*** xdotx has quit IRC | 01:53 | |
*** brennan__ has joined #tp | 02:05 | |
*** brennan_ has quit IRC | 02:06 | |
*** Lukstr has quit IRC | 02:19 | |
*** JLafont has quit IRC | 02:40 | |
*** niphree has joined #tp | 02:46 | |
niphree | hello | 02:47 |
JLP | niphree: morning | 02:47 |
Epyon | niphree :) | 02:47 |
niphree | JLP: morning :] | 02:48 |
niphree | Epyon: hi :] | 02:48 |
*** brennan__ has quit IRC | 02:48 | |
Epyon | Change of the watch, then. I'm going to sleep ^_^ | 02:48 |
*** Appleman1234 has quit IRC | 02:48 | |
niphree | Epyon: sleeping is a waste of time | 02:53 |
JLP | wow, KDE has 103 applications already | 02:57 |
llnz | JLP: kde had 50(+?) students last year | 03:00 |
JLP | llnz: if i remember correctly they had around 150 yes | 03:00 |
Epyon | niphree, aawriight. You convinced me xP | 03:02 |
JLP | :) | 03:03 |
Epyon | JLP, how many do we have up to now? :P | 03:03 |
JLP | 21 | 03:03 |
Epyon | That's still good :D | 03:03 |
llnz | niphree: hi | 03:03 |
JLP | Epyon: yeah not bad, and the quality is quite a bit better than last year | 03:04 |
niphree | llnz: hi | 03:04 |
mithro | hey niphree | 03:05 |
niphree | hello mithro :] | 03:06 |
mithro | niphree: how have you been? | 03:16 |
niphree | mithro: good :) | 03:23 |
niphree | mithro: I want to finish metaserver - if that's ok | 03:27 |
*** Appleman1234 has joined #tp | 04:01 | |
*** peres has joined #tp | 04:18 | |
JLP | ahoy peres and Appleman1234 | 04:20 |
peres | hi JLP | 04:20 |
* peres live from macdonalds ;) | 04:20 | |
* JLP can't remember how many years have passed since he has been in mcdonalds | 04:22 | |
peres | JLP: i personally hate it, but it's the closest place to my home that have wireless connection :P | 04:22 |
peres | has* | 04:23 |
JLP | ah the wireless, that's a different story then :) | 04:25 |
mithro | niphree: we are happy to have you work on it | 04:29 |
*** Appleman1234 has quit IRC | 04:35 | |
*** Appleman1234 has joined #tp | 04:53 | |
*** Appleman1234 has left #tp | 05:21 | |
*** peres has quit IRC | 05:26 | |
*** Demitar has quit IRC | 05:50 | |
* llnz wanders off | 07:53 | |
llnz | later all | 07:53 |
*** llnz has quit IRC | 07:53 | |
*** tarck4am has joined #tp | 08:29 | |
JLP | ahoy tarck4am | 08:45 |
tarck4am | hi JLP :) | 08:49 |
tarck4am | I'm just lurking around :P | 08:49 |
*** tarck4am is now known as karol | 08:49 | |
JLP | karol: cool, feel free to ask any question you might have | 08:50 |
karol | I submitted a proposal and wanted to ask if it's ok to use May 26 instead of your suggested ISO date format -> why? imho it looks better :) | 08:52 |
JLP | karol: yeah it's also fine by me | 08:53 |
karol | JLP: I read some other proposals that were quite long ... could you point me in which direction I should extend my proposal (from the technical point of view)? | 08:57 |
JLP | karol: i'd put as much effort as possible into the schedule and try to make it as detailed and as clear as possible what will be delivered and how to test that it works | 08:59 |
*** ryan__ has quit IRC | 10:59 | |
*** ryan__ has joined #tp | 11:08 | |
SmokingRope | morning | 11:14 |
JLP | SmokingRope: ahoy | 11:19 |
SmokingRope | JLP: have you read my proposal again? :) | 11:21 |
JLP | SmokingRope: which one was it again | 11:21 |
SmokingRope | 3D Game Client Using C++ and Ogre | 11:21 |
SmokingRope | http://easlnx01.eas.muohio.edu/~hannasm/TPProposal | 11:22 |
tpb | <http://ln-s.net/1k+2> (at easlnx01.eas.muohio.edu) | 11:22 |
JLP | SmokingRope: ah that one, i'll just go somewhere for a few minutes and when i come back i'll read it again just in case | 11:22 |
SmokingRope | i tried to address the scheme interpreter in this newest revision | 11:22 |
SmokingRope | i have read somewhere on the site that all of the ship design rules are written in scheme | 11:23 |
SmokingRope | and i don't believe there is a working C++ implementation for parsing TPCL yet | 11:26 |
mithro | SmokingRope: umm, tpserver-cpp has two such examples | 11:27 |
mithro | (one which binds mzscheme and one which binds guile | 11:27 |
SmokingRope | mithro: i've only read the text client so far | 11:27 |
SmokingRope | :( | 11:28 |
mithro | actually the scheme stuff is optional | 11:28 |
mithro | as the client can request that the server perform the calculation for it | 11:28 |
mithro | (it's just more annoying) | 11:28 |
SmokingRope | will there be many differences between the server-side scheme binding and the client-side binding? | 11:29 |
*** niphree has quit IRC | 11:32 | |
*** niphree has joined #tp | 11:34 | |
JLP | SmokingRope: looked at the proposal, looks very good to me | 11:45 |
SmokingRope | JLP: thanks for looking it over | 11:46 |
*** vi1985 has joined #tp | 11:49 | |
vi1985 | hello andrei, and everyone. sorry i'm late | 11:50 |
andrei | vi1985, Hello | 11:50 |
*** JLafont has joined #tp | 11:50 | |
*** JLafont_ has joined #tp | 11:51 | |
vi1985 | andrei: so are you still planning on having the q&a? | 11:51 |
vi1985 | andrei: or can I go to the lab and do my homework? :) | 11:51 |
andrei | vi1985, Heh, (I think perhaps you're making much more of a deal out of this than I intend to) I just want to ask a few questions | 11:52 |
vi1985 | andrei, ok shoot :) | 11:52 |
andrei | vi1985, Yeah.. I too am doing that. I have to teach a lecture on HMMs and CRFs.. such a pain | 11:52 |
vi1985 | :) | 11:52 |
vi1985 | andrei: just one request (for both of us), no flooding... just question --> answer format | 11:54 |
andrei | vi1985, So.. The dimensionality of the GA algorithm. You said 4D. Can you describe that space? | 11:55 |
vi1985 | andrei, that space would contain an encoded move-by-move sequence of actions for the AI player. I chose 4 dimensions, since it seems to be able to contain everything. | 11:57 |
vi1985 | andrei, | 11:57 |
andrei | vi1985, That's a description of the items in the space; not the actual search space. The important part for GAs doesn't refer to the items, but to the neighbourhoods | 11:57 |
andrei | vi1985, In the sense of: can you give an example of what a point in this 4D space would look like (and its interpretation in terms of actions) + its neighbourhood function? | 11:58 |
andrei | (I want to apologize again, I don't want to come off as an ass.. I just know GAs well and have used them quite a bit. I don't mean to offend by these questions) | 11:59 |
vi1985 | andrei: no worries. what do you mean by the neighborhood function then? | 12:00 |
andrei | vi1985, Well what's a GA? It's a way of searching a space of solutions, good old heuristic search (still amuses me that AI was called that at one point). But to describe that space you have to describe both the items in it (which I am still somewhat unclear of) and the neighbours of each item | 12:01 |
vi1985 | andrei, do you mean the other specimen of "hypotheses" in each generation? | 12:03 |
vi1985 | * sorry, meant to say hypotheseis-specimen | 12:03 |
andrei | vi1985, For example in Z^2, the points are ZxZ (tuples of integers) and neighbours are defined as being a neighbour of both of the two numbers (<= 1 to each) | 12:03 |
andrei | vi1985, Hmm.. I'll phrase it differently. Give me a point in your 4D space and tell me what it means (for some simple hypothetical game) | 12:04 |
vi1985 | andrei, a point is an action z, in action class y, on planet or fleet w, in future move x. The whole 4d space is nothing but an encoding of sets of actions for future moves; this is why i call it a Hypothesis. it is not the place where you search for solutions. | 12:05 |
andrei | vi1985, More detailed, an actual point | 12:06 |
andrei | (I'm actually goinging somewhere with this by the way :) | 12:06 |
vi1985 | andrei, an actual point is a single action, which is a part of some game "hypothesis". i thought it was clear? | 12:06 |
andrei | vi1985, Yes, how do you encode that? | 12:07 |
andrei | vi1985, I mean.. they get to be strings of integers somehow | 12:07 |
andrei | vi1985, Give me an example of one of these strings | 12:07 |
andrei | vi1985, And what it would mean in a hypothetical game | 12:07 |
vi1985 | andrei, yes, unique encoding by integers. that's the fastest and easiest | 12:07 |
vi1985 | andrei, why is it an issue? | 12:08 |
andrei | vi1985, Yes, so.. give me an example of one :) | 12:08 |
vi1985 | andrei, ok, A = <7, 14, 53, 144> ?? how does that help? | 12:08 |
andrei | vi1985, Because the proposal omits the most important part of a GA. Which is the encoding of points in the space and their neighbourhoods | 12:08 |
andrei | vi1985, So.. how do you interpret those numbers? | 12:08 |
andrei | vi1985, As in.. 7 is what.. 14 would be what, etc? | 12:09 |
andrei | vi1985, (and no, I'm not doing this to be an ass :) sorry again if it seems that way. This really is the critical thing for a GA and the first thing one must ask) | 12:10 |
vi1985 | andrei, on future move 7, on planet or fleet 14 (i could make the fleets a separate dimension, but then again i decided to put them after the max. planet), in action class 53 (say, colonize. it's not a necessary dimension, but makes it easier for other purposes), the specific action 144, which is "colonize Terra". Was that not clear in my previous wording? | 12:11 |
andrei | vi1985, Ok (it was somewhat clear, but that's my issue) are you familiar with the mutation operators for GAs? | 12:13 |
vi1985 | andrei, didn't I specifically say that several (randomly selected) z-values are going to mutate (I chose a simple way of doing it; instead of cross-breeding alleles, I'll simply do single-organism breeding for the best fitting ones.) | 12:15 |
vi1985 | andrei, what is the issue with that? | 12:15 |
vi1985 | *sorry, not cross-breeding alleles... but crossing alleles between specimen | 12:16 |
andrei | vi1985, crossover and mutation are two different things | 12:16 |
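(A minimal Python sketch, not taken from either proposal — every name and bound below is invented for illustration — of the 4-tuple hypothesis encoding vi1985 describes above, together with the two operators andrei is distinguishing: point mutation and single-point crossover.)

```python
import random

# One action is a 4-tuple: (turn, target planet/fleet, action class, action id),
# e.g. <7, 14, 53, 144> = "on move 7, on planet 14, class 53 (colonize),
# action 144 (colonize Terra)".  The bounds are purely hypothetical.
MAX_TURN, MAX_TARGET, MAX_CLASS, MAX_ACTION = 20, 200, 10, 500

def random_action():
    return (random.randrange(MAX_TURN), random.randrange(MAX_TARGET),
            random.randrange(MAX_CLASS), random.randrange(MAX_ACTION))

def random_hypothesis(length=50):
    """A hypothesis: an encoded move-by-move sequence of actions."""
    return [random_action() for _ in range(length)]

def mutate(hypothesis, n_mutations=5):
    """Mutation: replace a few randomly selected actions with new random ones."""
    child = list(hypothesis)
    for i in random.sample(range(len(child)), n_mutations):
        child[i] = random_action()
    return child

def crossover(parent_a, parent_b):
    """Crossover: splice two hypotheses together at a random cut point."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]
```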
vi1985 | andrei, ok, I inferred incorrectly. Where are you going with the argument? | 12:17 |
*** mithro has quit IRC | 12:19 | |
vi1985 | andrei, are you still here? | 12:19 |
vi1985 | andrei, because I have prepared some questions for you. it's only fair, if we're doing this | 12:20 |
andrei | vi1985, Sorry, I'm writing my talk while doing this :) | 12:20 |
vi1985 | mhh | 12:21 |
andrei | vi1985, Yes, feel free to ask anything at any point :) I welcome questions from anyone any time | 12:21 |
andrei | vi1985, One of the very important aspects (and one of the hard bits) of GAs is to have meaningful operations on these strings. The issue I have is that your operators aren't meaningful. For example, a slight mutation in one particular part of that 4D representation has no meaning in terms of the tactics within the game | 12:21 |
*** Karol_ has joined #tp | 12:22 | |
*** Karol_ has left #tp | 12:22 | |
andrei | If I modify the 3rd integer by 1, that does not represent a slightly different solution. It represents an entirely different one | 12:22 |
andrei | Whatever fitness function you pick with this representation will not be smooth | 12:22 |
andrei | (and since GAs are basically following the gradient that's really important) | 12:23 |
andrei | I just want to make one thing really clear. Don't feel you have to ask me questions because I'm asking them. Ask anything you want any time you want :) I'm glad to answer questions | 12:24 |
andrei | (so, please, do ask :)) | 12:24 |
vi1985 | andrei, first of all, this can be layered further. secondly, choosing which planet to invade with how many troops will only slightly alter the situation when you have tens or hundreds. Then, if the algorithm you're suggesting is only good for narrowing down action-paths for one local maximum, then it's inherently bad for a game, where many different alternatives can be considered. | 12:25 |
andrei | vi1985, I haven't suggested any algorithm :) (nor do I intend to) | 12:26 |
andrei | vi1985, The issue is with actions, not with amounts | 12:26 |
andrei | vi1985, You have an integer representing an action. There are no meaningful operations on this | 12:27 |
andrei | vi1985, Any change will give you a radically different action | 12:27 |
andrei | (that 53 in your original example) | 12:27 |
vi1985 | andrei, but again, a difference of even 5 actions, "radically different actions", is something I intend to do, not something I intend to avoid, when searching for a solution. | 12:27 |
vi1985 | andrei, and this is why I want 3 types of populations for each seed: a conservative one, a normal mutation rate, and a risky one for newer solutions | 12:28 |
vi1985 | andrei, does that clarify things for you? | 12:29 |
andrei | vi1985, But the GA requires a somewhat smooth fitness function. You can't write a smooth fitness function for that encoding. Any change in the action axis will give such radically different results (by changing the meaning of the amount axis) that heuristic search won't work | 12:29 |
andrei | vi1985, Let me make it concrete | 12:30 |
andrei | vi1985, That might hepl | 12:30 |
andrei | vi1985, erm, help | 12:30 |
vi1985 | andrei, oh, it's not going to be heuristic search... I'm just going to _dumbly_ look at the outcome of a game simulation, and see how it compares to others. | 12:31 |
andrei | vi1985, GA is heuristic search | 12:31 |
andrei | A = <7, 14, 53, 144>; so.. <turn, target, action, amount> So let's ignore the turn and the target (and say they're optimal) | 12:31 |
vi1985 | andrei, that may well be, but I'm not going to hard-code them. instead, I'm going to leave it an open question for the evolutionary process to decide. | 12:32 |
vi1985 | andrei, I think I made that clear in the proposal? | 12:32 |
andrei | Now.. we have <action, amount>; say action 1 is "Build scouts", and action 2 is "Research doomsday weapon"; I think it's fair to assume that the amount will refer to number of units, and number of turns for research to be done? | 12:33 |
andrei | vi1985, It's ok, let's try out this example (it will clear up this issue if we follow it through) | 12:34 |
vi1985 | andrei, ... that's not the way to decipher that specific encoding. you cannot mix build scouts, with research doomsday weapon, in the same action-category | 12:34 |
andrei | vi1985, But you said the representation was <some turn, some target, some action, some amount>; <0,0,1,0> and <0,0,2,0> aren't both valid? | 12:34 |
vi1985 | andrei, no. it's <some turn, on some of -your- planets or fleet, on some action> btw, I can easily have another dimension for "amount", if it's valid for the action. | 12:36 |
vi1985 | sorry | 12:36 |
andrei | vi1985, Oh, I'm sorry; that's just the example you gave above :) | 12:37 |
andrei | vi1985, Where does the 4th dimension come in? | 12:37 |
vi1985 | andrei, my mistake i forgot it now | 12:37 |
vi1985 | one sec | 12:37 |
andrei | vi1985, Sure :) | 12:37 |
vi1985 | <some turn, on some of -your- planets or fleet, some action category, some action> | 12:38 |
vi1985 | better | 12:38 |
vi1985 | andrei, I believe these are already implementation details you're wondering about, not general design? | 12:38 |
andrei | vi1985, Ok, so, what are your action categories? | 12:38 |
andrei | vi1985, They are not unfortunately | 12:39 |
andrei | vi1985, A genetic algorithm searches a space of solutions by looking at the gradient and heading that way (with a bit of slack since they get to mutate). This means that following that curve (the gradient) must be meaningful | 12:40 |
andrei | Which is a large part of what makes GA algorithms so hard to get right | 12:40 |
vi1985 | andrei, off the top of my head i can think of a few: research, attack planet/colonize, attack fleet, etc... | 12:40 |
andrei | (and why there are many conferences on GA algorithms decades after their introduction) | 12:40 |
andrei | vi1985, Ok, I choose.. research and planet attack | 12:40 |
vi1985 | andrei, did you look at the code sample i posted? | 12:40 |
andrei | vi1985, Code samples? No, I have not | 12:41 |
vi1985 | andrei, it's very basic, but it shows very nicely that the basic idea here _works_ | 12:41 |
vi1985 | andrei, check it out, there's a link at the bottom of the wiki ;) | 12:42 |
andrei | vi1985, Ah, ok. That's different, and that works because of what I am saying is missing here :) | 12:42 |
andrei | vi1985, But let's go on | 12:42 |
andrei | vi1985, So.. I chose.. research and planet attack (R, A henceforth) | 12:43 |
vi1985 | andrei, what do you mean? | 12:43 |
andrei | vi1985, <0,0,R,?> <0,0,A,?> | 12:43 |
andrei | vi1985, Can you give me two examples that would fit into the question marks | 12:43 |
andrei | vi1985, Any two? | 12:43 |
andrei | vi1985, It'll be clear in a few moments after this example | 12:43 |
vi1985 | andrei, i was referring to the previous statement.. but nm. Ok, say 1 and 2? | 12:43 |
andrei | vi1985, I mean.. interpretations for them (like.. different types of attack, etc?) | 12:44 |
vi1985 | andrei, btw, these 1 and 2 are only meaningful in the context of the 3-rd dimension... sorry if it wasn't clear | 12:44 |
andrei | vi1985, Yeah, that's the problem :) | 12:44 |
andrei | vi1985, But it's ok. I'll make it clear why | 12:44 |
vi1985 | andrei, go ahead | 12:44 |
andrei | vi1985, What would <0,0,R,1> mean and <0,0,A,1> mean? | 12:45 |
andrei | (sorry for this. I like the Socratic method and aside from explaining it the gradient way I think this is the simplest) | 12:45 |
andrei | By the way, the link to your code example is broken :) (but I'm assuming you're maximizing a polynomial) | 12:46 |
vi1985 | andrei, again, R-1 and A-1 would have _different_ meaning. Actually, R would only have one action: "research next". Let's say that R-1 is research next. A-1, let's say it's "send a fleet to Alpha Centauri. | 12:46 |
vi1985 | andrei, is that better? | 12:46 |
andrei | vi1985, Ok, that's fine. (I understand they have different meaning) | 12:46 |
andrei | vi1985, So.. <0,0,R,1> means "More research" and <0,0,A,1> means "Attack Planet X" but... they're next to each other (since I said R was 1 and A was 2) | 12:47 |
vi1985 | andrei, cool. was that the core of the issue? Thanks a lot for pointing it out... I'll include it in the description on the wiki | 12:47 |
andrei | No :) that's not the issue, but I'm getting there right now | 12:48 |
vi1985 | andrei, yes, but a Hypothesis is _not_ a search space... | 12:48 |
andrei | vi1985, So.. if you pick a point and look at the neighbours of that point (let's say you have a perfect fitness function) | 12:48 |
vi1985 | andrei, ok, go ahead | 12:48 |
vi1985 | andrei, it would be meaningless, as you said before, but that is not a problem... | 12:48 |
andrei | vi1985, A hypothesis is not a search space, the space of all of them is though | 12:48 |
vi1985 | andrei, ok, yes | 12:49 |
andrei | vi1985, No. That's a huge problem. It means that even if your entire algorithm is perfect. Your fitness function is absolutely perfect. The GA will follow the gradient | 12:49 |
andrei | vi1985, But you're saying the gradient has no meaning | 12:49 |
andrei | vi1985, Let me say that in a simpler way | 12:50 |
vi1985 | andrei, ? I thought you meant its neighbours in the same hypothesis | 12:50 |
vi1985 | let me finish | 12:50 |
andrei | vi1985, Sure, go on :) | 12:51 |
vi1985 | andrei, the fitness function will be there to evaluate results of simulations, _not_ to compare the nitty-gritty of each hypothesis to its neighbours. This is way simpler, and more elegant. The selection process happens "laissez-faire". | 12:52 |
andrei | vi1985, Yes, that's fine | 12:52 |
andrei | vi1985, A simulation is a way of saying "What will happen in the future if I do this" | 12:52 |
vi1985 | andrei, yes | 12:52 |
andrei | vi1985, Let's from now on assume that your fitness function is absolutely perfect | 12:52 |
andrei | vi1985, Doesn't matter how it's implemented | 12:52 |
vi1985 | andrei, by fitness function we both mean evaluating a universe-snapshot? | 12:53 |
vi1985 | andrei, to see how well the player fares in the game? | 12:53 |
andrei | vi1985, I mean. A function that takes a hypothesis and tells me how good it is so that the GA knows how to rank them | 12:53 |
vi1985 | andrei, yes, so we're on the same page | 12:53 |
vi1985 | andrei, as we were 10 minutes ago :) | 12:54 |
andrei | vi1985, Ok. So now.. you have these two points. <R,1> and <A,1> (omitting the leading 0,0) | 12:54 |
vi1985 | ok | 12:54 |
andrei | vi1985, Now.. do you agree that all a GA does is follow a curve (and then on occasion poke outside to try to find better solutions?) | 12:54 |
vi1985 | andrei, that would be an emergent property, yes. And the "3 population" design would constantly make it look for new solutions. | 12:55 |
andrei | vi1985, No.. it's not an emergent property | 12:55 |
andrei | vi1985, A GA is designed to follow the gradient of a curve and look for minima and maxima (and add some randomness to try to escape local ones) | 12:56 |
vi1985 | andrei, all I'm hard-coding is natural selection... the optimization process would, in fact, be an emergent property | 12:57 |
andrei | vi1985, Ok, perhaps this is the issue | 12:57 |
vi1985 | andrei, ok? | 12:57 |
vi1985 | andrei, it seems you're poking around for the issue... | 12:57 |
andrei | vi1985, No.. I know the issue. Your search space is ill defined for GAs because it can't be smooth and your gradient is meaningless in the action dimension | 12:58 |
andrei | vi1985, The fact that a GA uses evolutionary terminology has no bearing upon what it does or what it was designed to do. It is very simply an optimization method for functions. | 12:58 |
andrei | vi1985, It is essentially equivalent to following the curve of a function and sometimes probing distant parts hoping for a better solution | 12:58 |
andrei | vi1985, But in your search space that curve has no meaning | 12:59 |
vi1985 | andrei, no, the gradient _will_ in fact be smooth, since changing an action would only slightly impact the overall result in a big game... | 12:59 |
andrei | vi1985, Because putting research next to attack, even with a perfect fitness function, doesn't mean anything. They're simply two different solutions | 12:59 |
vi1985 | andrei, it is not _meant_ to mean anything... the whole n-D space is just a way to uniquely encode actions. | 13:00 |
andrei | vi1985, Not at all. If I decide to not attack my enemy this turn and instead to research; that has a huge impact on the game | 13:00 |
andrei | vi1985, But that's by far the most important part. The part you're dismissing as just encoding is what entire GA conferences get to talk about :) | 13:01 |
andrei | vi1985, The reason GAs work is because a small change in a solution brings you a little bit closer to a better solution (or worse) and you can be somewhat sure that if that change wasn't good there won't be an incredibly good solution right next to it | 13:02 |
vi1985 | andrei, it will _definitely_ take some effort to make individuals mutate _smoothly_... this is one of the main challenges. The curve will not be as unpredictable as you make it out to be, since a single action is not that important. | 13:03 |
andrei | vi1985, Your search space doesn't have this property :) (and if you want, you can prove this to yourself by drawing part of it out) | 13:03 |
vi1985 | andrei, no, a single action will not have that much of an impact on the overall outcome | 13:03 |
vi1985 | andrei, plus, I will have a conservative population, and on top of that, preserve solutions between generations and turns... | 13:04 |
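(Continuing the earlier sketch, and again with every name and rate invented: the selection loop vi1985 describes, ranking whole hypotheses by simulating the game forward and breeding conservative, normal, and risky sub-populations while preserving the fittest survivors between generations. It reuses the hypothetical mutate helper from the sketch above.)

```python
import random

def evolve(population, simulate, generations=50):
    """simulate(hypothesis) plays the game forward and scores the resulting
    universe snapshot -- the fitness function in this discussion."""
    # three sub-populations with different mutation pressure
    rates = {"conservative": 1, "normal": 5, "risky": 15}
    size = len(population)
    for _ in range(generations):
        ranked = sorted(population, key=simulate, reverse=True)
        survivors = ranked[:max(1, size // 3)]   # fittest third survives
        population = list(survivors)             # preserved between generations
        while len(population) < size:
            parent = random.choice(survivors)
            n = rates[random.choice(list(rates))]  # pick a sub-population
            population.append(mutate(parent, n_mutations=n))
    return max(population, key=simulate)
```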
andrei | vi1985, How you write the GA doesn't matter in this case | 13:04 |
andrei | vi1985, Assume it's the perfect GA for this problem | 13:04 |
vi1985 | andrei, ok, assumed. | 13:05 |
andrei | vi1985, The issue is with the topology of the search space. Let me give you another example of the search space. Maybe this will make this clearer: <attack planet, Planet X> (<1,1>) <attack unit, unit Y> (<0,1>) <attack unit, unit Z> (<0,0>) | 13:06 |
andrei | Now.. unit Y can be a horrible unit to attack, so bad indeed that you would be throwing away your unit doing it (or it can be far away | 13:06 |
andrei | ) | 13:06 |
andrei | But attacking planet X can be good because you can take it. But unit Z is threatening your capital and unless you kill it you're a goner | 13:06 |
andrei | Here are 3 points in your search space, they're next to each other. The gradient between them doesn't actually mean anything | 13:07 |
andrei | For a GA to work what has to have meaning is the gradient, that's all :) | 13:07 |
andrei | Everything else is just a series of small details | 13:07 |
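(A tiny, purely illustrative sketch of the objection andrei is raising here, using his own <attack planet, Planet X>, <attack unit, unit Y>, <attack unit, unit Z> example: three points that sit next to each other in the encoding can have completely unrelated fitness, so the landscape the GA climbs carries no usable gradient. The fitness numbers are invented.)

```python
# Hypothetical fitness of three *adjacent* points in the encoded space,
# using (action, target) with action 0 = "attack unit", 1 = "attack planet".
fitness = {
    (1, 1): 0.90,  # <attack planet, Planet X>: the planet can be taken
    (0, 1): 0.10,  # <attack unit, unit Y>: throwing the attacking unit away
    (0, 0): 0.95,  # <attack unit, unit Z>: must kill Z or lose the capital
}

# One step along either axis changes the meaning of the action entirely,
# so the differences between neighbours carry no information to follow:
for a, b in [((1, 1), (0, 1)), ((0, 1), (0, 0))]:
    print(a, "->", b, "delta fitness:", fitness[b] - fitness[a])
```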
vi1985 | andrei, natural selection will take care of that thinking process, since it's not considering it move-by-move, but as a _whole_ sequence. | 13:08 |
andrei | vi1985, But it doesn't matter if you consider it as a whole sequence or move by move. Please stop calling it natural selection, it's a mathematical optimization algorithm (natural selection implies much more than following a curve) | 13:08 |
andrei | vi1985, The issue is that it follows a curve, that's it (mostly) and your curve doesn't encode information | 13:09 |
andrei | vi1985, It's what everyone new to GAs does; you just have to figure out a way to make the changes in your search space meaningful. Not the search space itself | 13:09 |
vi1985 | andrei, your objection is simply void, since you did not get this issue. it is natural selection, since if you look at the design documents, you will see that this is _really_ what I'm hard-coding. Maybe we have a terminology issue? | 13:10 |
andrei | vi1985, You're writing a genetic algorithm to solve something. Natural selection is a biological process through which species evolve in the real world. Genetic Algorithms are a branch of mathematics which optimize functions | 13:10 |
andrei | vi1985, They are entirely separate | 13:10 |
andrei | vi1985, And have very little to do with each other | 13:11 |
andrei | vi1985, (aside from the horrible choice in terminology which leads to confusion) | 13:12 |
vi1985 | andrei, natural selection is a process by which fitter individuals survive to reproduce. Perhaps it would be better to simply call it an "evolutionary algorithm" instead; I wasn't aware I was abusing notation. | 13:13 |
vi1985 | *terminology, not notation | 13:13 |
andrei | vi1985, It is not an abuse of terminology. Natural selection cannot be programmed. It is a biological term for a process which happens in the real world. It has nothing in the least to do with computers | 13:14 |
andrei | vi1985, Evolutionary algorithms are a discipline of mathematics (and CS) and Genetic Algorithms (which is what you propose) are part of that | 13:14 |
vi1985 | andrei, what about various artificial-life simulators? | 13:14 |
vi1985 | andrei, this is -exactly- what happens there... as far as I know, they are within the set of GA solutions. | 13:15 |
andrei | vi1985, Those are different, they do not rely on evolutionary algorithms (nor do they encode information the same way generally). They rely on the theory that was originally developed for cellular automata | 13:15 |
vi1985 | andrei, not always; most of the better ones today aren't cellular anymore, and in fact they have hard-coded neural-net intelligence evolution, and phenotype evolution... | 13:16 |
andrei | vi1985, That is entirely different from this. Genetic Algorithms do one specific thing. That's all. They follow that curve and sometimes poke around | 13:17 |
vi1985 | andrei, natural selection can be easily coded... it's just an issue of how good it is, relative to the task | 13:17 |
andrei | vi1985, Natural selection is not an algorithm. It is a set of biological principles | 13:18 |
andrei | vi1985, It has nothing at all to do with CS. Only mathematical optimization matters in algorithms | 13:19 |
vi1985 | andrei, if you're right, it was a misnomer to call my design that. For all that I've read, I wasn't aware that this is the case. But even in that case, it does nothing to detract from the merits of the solution. | 13:19 |
andrei | vi1985, You're implementing a genetic algorithm to solve a particular problem | 13:19 |
andrei | vi1985, No? | 13:19 |
vi1985 | andrei, natural selection is an algorithm. Biologists are still trying to figure out the fitness function, and the genotype. Now, for all the research that I've done, I became more and more convinced that what I'm doing is in fact GA. I do not see enough evidence at the moment to rename it, but thanks to our discussion i'll dig around again, to see if there are any inconsistencies. | 13:22 |
andrei | vi1985, Wait | 13:22 |
vi1985 | andrei, your criticism, as far as i can see, was trying to map my design to some preconceived notion you have | 13:22 |
andrei | vi1985, The algorithm you describe. Whatever you call it, is a genetic algorithm | 13:22 |
vi1985 | andrei, i'd like to think so. do you? | 13:23 |
andrei | vi1985, It's not a matter of thinking it :) That's what it is if you read through it. Now.. GAs have a large body of mathematics and a few decades of work (I think about 3 at this point) on them | 13:23 |
andrei | vi1985, And that tells us that GAs are essentially gradient-descent algorithms, which means what matters is the gradient of the search space. I'm not sure how I can say it :) Your algorithm is of a class that's been explored and doesn't fit well with your search space | 13:24 |
andrei | vi1985, I can show you appropriate books, or literature that can explain this more formally. (or I can do so, the math is easy) | 13:25 |
andrei | vi1985, Or point you to a GA tutorial that's written by a real researcher in the field explaining this. (Koza if I remember right has some, well anyway, his book was good) | 13:26 |
vi1985 | andrei, I haven't studied it formally, if that's what you mean to highlight. In fact, it's written plainly in my proposal. Your criticism, again, refers to your failure to map my design with a template. | 13:26 |
andrei | vi1985, No.. not at all | 13:27 |
vi1985 | andrei, what gives me the confidence in the design, is the code that I have already written, which successfully simulates natural selection, and which works, and works great. | 13:27 |
andrei | vi1985, My issue is that your algorithm happens to be part of a widely studied mathematical field and that 30 years of work on that have shown that it's a gradient descent algorithm and that the curve matters. Not the space. | 13:27 |
andrei | vi1985, If you're referring to your example code. (which I'll assume was maximizing a polynomial) of course it works :) The gradient has meaning there | 13:28 |
vi1985 | andrei, the curve definitely matters. I have said it to you many times here; you claim that the curve will not be smooth, and I claim it will be smooth enough. | 13:29 |
vi1985 | andrei, the code example isn't about maximizing a polynomial. download it and see for yourself | 13:29 |
vi1985 | andrei, if we agree to disagree, i have a few questions for you | 13:30 |
andrei | vi1985, But.. there's no smooth enough. It's the fact that the gradient must have meaning. It must be meaningful to want to head in a certain direction | 13:30 |
andrei | vi1985, There's a clear mathematical definition for this | 13:30 |
andrei | vi1985, And your search space doesn't satisfy it :) | 13:30 |
andrei | vi1985, I'm not saying it can't work. I'm saying that you should look more closely at all the incredible amount of research that has been done on GAs because I have (I've read quite a few books and papers and plan to use some in my PhD research) and you're missing a key component | 13:31 |
andrei | vi1985, (well, it can work if you change your search space) | 13:31 |
vi1985 | andrei, if it's reasonable to approximate it with a smooth graph, then it's smooth enough, and has meaning. My working hypothesis is that from the hundreds of actions taken each turn, changing 4 or 5 of them will not radically change the situation. | 13:32 |
andrei | vi1985, It is not smoothness that matters. It is that the change must mean something | 13:32 |
andrei | vi1985, Smoothness is a result of that change being meaningful | 13:33 |
andrei | vi1985, Not the other way around | 13:33 |
vi1985 | andrei, in natural selection it's exactly the other way around. And it has been successfully implemented in artificial-life simulation. | 13:34 |
andrei | vi1985, This is not natural selection. It is a mathematical optimization technique | 13:34 |
andrei | vi1985, That happens to have chosen the same terminology | 13:34 |
vi1985 | andrei, perhaps, again, it's a misnomer on my side, and I can call it something else to avoid possible confusion. | 13:35 |
andrei | vi1985, But it doesn't matter what you call it. That's the algorithm you're actually implementing :) | 13:36 |
andrei | vi1985, But it's ok. I can send you something to read on the subject that will hopefully shed some light on optimization and this class of algorithm. If you want | 13:36 |
andrei | vi1985, Sorry. We've been doing this for 1.5 hours and.. I have work to do :) Did you want to ask me any questions? | 13:36 |
vi1985 | andrei, all I'm hard-coding is a natural selection process. If it does not meet the rigorous definition for GA, then it's not consequential at all if I rename it. | 13:37 |
andrei | Also, again. Feel free to ask anything, at any time at all :) | 13:37 |
vi1985 | andrei, do you have 15 mins? | 13:37 |
andrei | vi1985, You are not implementing natural selection (if you visit a biology professor and tell them that they might scream at you). It meets the exact definition of a GA. And that just happens to tell us that the search space you are describing doesn't work :) | 13:38 |
andrei | vi1985, Sure :) | 13:38 |
vi1985 | andrei, I am implementing natural selection, as do other algorithms (e.g. artificial life simulators)... but we agree to disagree. | 13:38 |
vi1985 | ok | 13:39 |
vi1985 | andrei, so you seem to want to use alpha-beta pruning for your game-trees? | 13:39 |
andrei | vi1985, Only in the endgame when there are only 3-4 moves left | 13:39 |
andrei | vi1985, And the game is trivial | 13:40 |
vi1985 | andrei, and there is only one opponent left? | 13:40 |
vi1985 | andrei, what if the endgame is with 3 opponents? | 13:40 |
andrei | vi1985, (so I only have say 1 planet, 1 ship and can only see a few enemy ships around) | 13:40 |
vi1985 | andrei, so how do you define "endgame"? | 13:41 |
andrei | vi1985, Well, if it's trivial (so if the tree has some low branching factor) it'll kick in alpha-beta pruning, otherwise it won't | 13:41 |
andrei | vi1985, Say.. the game tree has a branching factor of 10 | 13:41 |
andrei | vi1985, (which I randomly chose right now :P) | 13:41 |
andrei | vi1985, You can set whatever value you want, it depends on how good you want the AI to be in holding out right at the bitter end and how long you want that to take. | 13:42 |
vi1985 | andrei, ok, but will your AI "kick it in" even if there are multiple opponents left? | 13:42 |
andrei | vi1985, If the game tree has a branching factor less than a particular set value | 13:42 |
andrei | vi1985, Which means, if there are fewer than that many different things to look at in the game tree at every step | 13:43 |
andrei | vi1985, It's a much more robust definition than looking at how many oponents there are | 13:43 |
vi1985 | andrei, ok. first of all, how is your alpha-beta decision tree going to account for multiple opponents, when, by def, it is built for a game of 2? | 13:43 |
andrei | vi1985, And the number of opponents doesn't say anything. One opponent can have 20 ships right next to me which can make the game tree way more complicated | 13:43 |
andrei | vi1985, It's the endgame. If this ever comes around the AI is already dead. It'll just break down everything into itself and opponents | 13:45 |
andrei | vi1985, Ignoring allies (doesn't matter, it's dead in a few moves because it has so few units) | 13:45 |
vi1985 | andrei, so you want to go by the Paranoid Hypothesis? | 13:45 |
*** jphr has joined #tp | 13:46 | |
vi1985 | andrei, since this is, in 99% of the cases, the wrong heuristic to use | 13:46 |
vi1985 | andrei, as it is highly unlikely all players have ganged up on you. | 13:47 |
andrei | vi1985, Paranoid hypothesis? You mean. Will it assume while doing alpha-beta pruning that things can go as wrong as possible? | 13:47 |
andrei | vi1985, By the way, paranoid hypothesis is not a term used in the literature | 13:47 |
vi1985 | andrei, it will cluster all other opponents as one, and treat them as if they're ganging up on you. | 13:48 |
vi1985 | this is the only way to keep it true to def of alpha-beta | 13:48 |
andrei | vi1985, Sure. It doesn't matter. The only time this applies is as at the end as the AI is only 2-3-4 moves away from death | 13:48 |
andrei | vi1985, And it has almost no units, and almost no planets left | 13:49 |
andrei | vi1985, (even having 2 units means your branching factor goes to 18+ it's irrelevant) | 13:49 |
vi1985 | andrei, so you're dooming your AI from the start? Suppose your other strategy is successful at bringing it close to endgame, and then it switches to a completely misguided heuristic, and loses? or reverts back to a worse situation, at which point the other module takes over? | 13:50 |
andrei | vi1985, The start? This only applies when the AI has just 1 unit (and can't build anything) or no units and can maybe build something. | 13:51 |
andrei | vi1985, It no longer matters at that point | 13:51 |
andrei | vi1985, Anyway, the alpha-beta pruning was put in there to give me a way to test out the algorithms in a controlled way (not in a real AI). And it just seems reasonable to use it right at the end. | 13:52 |
andrei | vi1985, The rationale is when you have no units, being paranoid is the only way to survive (since anyone that notices you will kill you off) (or when you have just 1 and don't have the ability to build more) | 13:53 |
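(A rough sketch of the endgame rule andrei describes: fall back to plain alpha-beta only when the game tree's branching factor is below a configurable cutoff, treating all remaining opponents as a single adversary. The GameState interface — legal_moves, apply, is_terminal, score — and the cutoff value are assumptions made only for this illustration.)

```python
import math

BRANCHING_CUTOFF = 10   # hypothetical threshold ("say.. 10" above)

def branching_factor(state):
    """Placeholder: how many distinct moves the position offers."""
    return len(state.legal_moves())

def evaluate(state):
    """Placeholder static evaluation of the position for the AI player."""
    return state.score()

def endgame_move(state, depth=4):
    """Use alpha-beta only when the position is trivially small."""
    if branching_factor(state) >= BRANCHING_CUTOFF:
        return None          # not trivial; the main AI keeps control
    _, move = alphabeta(state, depth, -math.inf, math.inf, maximizing=True)
    return move

def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return evaluate(state), None
    best = None
    value = -math.inf if maximizing else math.inf
    for move in state.legal_moves():
        child, _ = alphabeta(state.apply(move), depth - 1, alpha, beta,
                             not maximizing)
        if maximizing and child > value:
            value, best = child, move
            alpha = max(alpha, value)
        elif not maximizing and child < value:
            value, best = child, move
            beta = min(beta, value)
        if alpha >= beta:
            break            # cutoff: the rest of this branch is pruned
    return value, best
```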
vi1985 | andrei, this seems like redundant overhead; sure, if you want to use it as scaffolding in your design process, then it can do ok for testing that everything else works. But why even bother throwing it in, if you supposedly have a superior algorithm? | 13:54 |
vi1985 | andrei, I remember at one point on IRC you said it will be more central than you're claiming now. have you made considerable changes to design? | 13:55 |
andrei | vi1985, The alpha beta pruning? I don't think I said that. No the design has been the same since the start :} | 13:55 |
andrei | vi1985, You're welcome to see my git logs :) (I keep all my work in a large git repository) | 13:56 |
vi1985 | andrei, you actually said something to that effect, but I won't insist. Besides, it's very hard to tell from your design document just how central you want it to be. | 13:57 |
vi1985 | andrei, I also had a question about your strategy modules | 13:57 |
vi1985 | andrei, you mention an ability of one module to generically call another. So, if your highest strategy module decides on a course of action, just how will it know the actual cost involved, if there is only top-down communication? | 13:59 |
vi1985 | let me elaborate | 13:59 |
andrei | vi1985, If you claim I said it. I'd like to see the quote :) (saying you won't insist is the equivalent of saying that I'm dishonest and that you're right but it's not worth pursuing). The only time I mentioned alpha beta pruning I said that I want to use that and an inference engine and explore the tree using the strategy modules. | 14:00 |
andrei | vi1985, But go on :) | 14:00 |
vi1985 | andrei, no, it's not. it's equivalent to saying that I remember something to that effect, but I won't hold you responsible, since I may have not understood it correctly. | 14:01 |
andrei | vi1985, Okies :) go on (I looked at the logs, I only mentioned alpha beta pruning once on IRC aside from today and that was in the context above) | 14:02 |
vi1985 | andrei, for example the highest-order strategy module decides to take over sector A and not B. It sends orders to modules underneath, to get resources, to prepare starfleets etc. But until this process trickles down to the basic actions, it may well be much costlier than taking over sector B. The point is that your top-level modules will have no way of knowing the cost. | 14:03 |
andrei | vi1985, Sorry, I just don't like it when people say I've made a mistake and don't explain what it is. It's a pet peeve :P But one that works really well with math and CS (don't mean to imply anything about you) | 14:03 |
vi1985 | andrei, and if you plan on bottom-up inter-module communication, how are you going to prevent it from being brute-force? | 14:04 |
vi1985 | andrei, I was just explaining my rationale for the emphasis on alpha-beta trees in your design | 14:05 |
vi1985 | andrei, because it seems to me, that this can lead to sluggish and bizarre behaviour. | 14:05 |
andrei | vi1985, In your example that module wouldn't pick A or B. It would propose either A or B and other modules are applied to (plan A) and (plan B) until one of them seems better (or it stops looking ahead). | 14:06 |
vi1985 | andrei, ok, that's one way of finding out the cost. But how are you planning on not making it brute force search through an unknown tree-depth? | 14:07 |
vi1985 | andrei, because if you're going to decide at bottom level, a) it may take a while when you reach it, and b) you have no idea if and when you reach it at all | 14:08 |
andrei | vi1985, Bottom level? There's no bottom level. The search stops through cutoffs | 14:08 |
* danderson glazes over at the uber intricate debate on AI techniques | 14:09 | |
vi1985 | as far as I understand, each module is responsible for controlling other modules in a hierarchical manner, until that hierarchy stops at the action-level | 14:09 |
andrei | vi1985, The same way that chess engines work, for example. They have cutoffs. Heuristics for weighing parts of the tree that are so far unexpanded. And they focus on what is interesting. | 14:09 |
vi1985 | andrei, right, but you will never know the actual cost until you have analyzed the leaves of that tree, since unlike chess, your tree is a hierarchy of orders. | 14:11 |
andrei | vi1985, Not at all. Modules are not a hierarchy. They're actually a graph. There's a tree that gets expanded as modules wish it to be. A few modules are deemed the initial ones and they first expand which part of the tree is interesting. | 14:11 |
andrei | vi1985, Hmm.. modules are just heuristics which get to expand the tree, and when they get to a bit of the tree they're not designed to handle they know which module to hand it to next | 14:12 |
andrei | vi1985, It's just like chess :) (except their weight functions get to be a bit more global, and a lot more complex) | 14:12 |
Ohm | there are some pretty damn good dissertations on this kind of stuff | 14:12 |
Ohm | search around for othello AIs, their structure and logic is really important, while not being especially bogged down by rules. Makes a good case study | 14:13 |
andrei | vi1985, An 'order' is just a module deciding that a path through the game tree is interesting and expanding it :) | 14:13 |
Ohm | But it sounds like you know all about it | 14:13 |
andrei | Ohm, I haven't looked at othello AIs (mostly because I don't know how to play). I have read some papers on chess/go/poker AIs | 14:13 |
Ohm | Hmm, I believe developing an AI for go and for othello would be extremely similar | 14:14 |
andrei | Ohm, Really? It's that much of a pain? ouch.. | 14:14 |
Ohm | just remove the fact that stones flip other stones and expand the board a lot, and you've made othello into go | 14:14 |
Ohm | oh, and change score count rules a bit - area instead of stones | 14:14 |
andrei | Ohm, Funky. Maybe I'll learn how to play it then. I have difficulty finding go players here | 14:15 |
Ohm | othello is a very simple game, and takes VERY little effort to learn | 14:15 |
Ohm | like 5 minutes tops | 14:15 |
andrei | Ohm, Awesome. Thanks. I'll take a look later tonight :) | 14:15 |
vi1985 | andrei, frankly, I'm not sure how to understand this graph --> tree mapping. At base, you will need a tree to look at your options. So perhaps you're saying the modules will create branches of this tree, and hand it over to other modules if they're stuck. but you will again never know the cost until you reach the leaves. | 14:15 |
andrei | vi1985, Is there anything else you wanted to ask? :) | 14:15 |
Ohm | you usually have a small amount of possible moves, so it's good on memory as well :) | 14:15 |
andrei | Oh there we go | 14:16 |
vi1985 | brb | 14:16 |
andrei | vi1985, There exists a game tree. And I can apply a predicate 'expand-action' on any node. Given some action, I can expand that node to see its consequences. I choose a few modules and say "I want an answer so what to do next". Those modules each find a different branch of the tree interesting, they go and expand it. Then they reach a point where their heuristics tell them to expand with a different module, and they do that. The tree is something that gets used by them, there's no mapping between modules and tree. | 14:18 |
andrei | Ohm, Do they have the whole stone handicap thing, like in go? | 14:18 |
andrei | vi1985, That's why I say the central part of the algorithm isn't the tree. It's the modules. It's just like you would write a normal AI. But instead of hardcoding the rules, they're encoded in the game tree. | 14:20 |
andrei | vi1985, (I should clarify, I say that in the proposal) | 14:21 |
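(A sketch of the expansion loop andrei is describing in words, with a wholly invented module/node interface: strategy modules expand the branches of the game tree they find interesting and hand nodes they are not designed to handle to other modules, under a fixed expansion budget.)

```python
def choose_action(root, initial_modules, budget=1000):
    """Explore the game tree by letting strategy modules expand it.
    Assumed, hypothetical interface: module.expand(node) yields child
    nodes carrying a .weight and .action; module.handoff(node) returns
    another module when the node is outside this module's expertise,
    or None to keep handling it."""
    frontier = [(module, root) for module in initial_modules]
    expansions = 0
    while frontier and expansions < budget:
        module, node = frontier.pop()
        for child in module.expand(node):
            expansions += 1
            next_module = module.handoff(child) or module
            frontier.append((next_module, child))
    # pick the immediate move whose subtree the modules weighed highest
    return max(root.children, key=lambda c: c.weight).action
```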
Ohm | andrei: no, no handicaps in othello | 14:21 |
Ohm | I think | 14:21 |
Ohm | and if there are, they consist of points, not stones | 14:21 |
andrei | Ohm, That's my favourite thing about go. Two people can always play regardless of their relative skill levels. | 14:22 |
vi1985 | sorry, phone call | 14:23 |
Ohm | andrei: sorry, gotta go | 14:23 |
Ohm | but yeah, I also played go a bit | 14:23 |
Ohm | few years ago | 14:23 |
Ohm | anyways bbl | 14:23 |
andrei | Ohm, See ya :) | 14:24 |
vi1985 | andrei, right, but still at base you're exploring the game-tree using those modules. How can any module decide on an "interesting" direction, if it has no way of actually knowing the cost of those actions? And if, as I said earlier, a module decides to explore A and not B, it will have no idea it will be cheaper than the alternative, unless it actually traverses to the leaves (with help of other modules, perhaps) and finds out the actual cost. | 14:26 |
andrei | vi1985, It does have cost functions. I never said modules were devoid of any logic. Just that the rules are abstracted out | 14:27 |
*** brennan_ has joined #tp | 14:28 | |
vi1985 | andrei, but this is the whole point of having such a generic design. If you're going to give them heuristics to go by, why not discard it altogether, and simply have heuristical micromanagement? is this what your design boils down to? | 14:28 |
andrei | vi1985, Traversing to the leaves of a game tree means exploring a path until the end of a game | 14:28 |
andrei | vi1985, That is actually an infinite path in #tp | 14:28 |
vi1985 | andrei, no, that is _definitely_ not what I mean. I mean peeling layers of abstraction, until you reach specific actions, | 14:29 |
*** ryan__ has quit IRC | 14:29 | |
vi1985 | . | 14:29 |
andrei | vi1985, There are no levels of abstraction. The game tree is composed of actions | 14:29 |
vi1985 | andrei, didn't you say that these modules will be sets of abstracted rules? and that a module can call other modules if necessary? in that case, you get a tree (not necessarily a rooted tree), with decreasing levels of abstraction as you go down. A module can have, e.g., 5 leaves, and 6 sub-trees, where it calls other modules. | 14:32 |
andrei | vi1985, You are implying this system can play a game without any background knowledge. Not at all, that would be a very odd claim for anyone to make. There are strategies that modules can use to explore. That's why they're called strategy modules. So a module can see if it is beneficial to do research. This means expanding a node in the tree that chooses to do research and then applying other modules to see if it's viable. The same way any AI wo | 14:32 |
andrei | rks but without hardcoding the rules. | 14:32 |
vi1985 | rks? | 14:32 |
andrei | works. IRC cut it off :) | 14:32 |
vi1985 | censorship :) | 14:33 |
andrei | vi1985, (well, I don't know if it's my client or freenode) | 14:33 |
andrei | vi1985, Heh. Yeah. Abstracted rules, like "It's a bad idea to do research when an enemy can reach a weak planet in a few turns, build instead" Those don't imply much about the game rules | 14:33 |
vi1985 | andrei, ok. Sure, but why make modules without hardcoding heuristics into them? | 14:33 |
andrei | vi1985, I never said I won't have weight functions to decide what's likely to happen. Just that they'll be general and rule-independent, the rule-dependent bits are done by the game tree. | 14:34 |
andrei | vi1985, Note that the examples I gave actually had logic in them. One of them decided that a unit had to be moved and called more primitive ones to find paths. | 14:36 |
andrei | vi1985, All I want to do is eliminate the need to hardcode ideas like terrain features in the AI. I don't want to remove the general reasoning about terrain features. That would be quite the claim :) | 14:37 |
vi1985 | andrei, it doesn't matter where your weight assessment functions come from (can be from library, hard coded, etc. It's really not the point). The point is, that you will never know the real weight of an inner node in that abstraction-layer tree, until you actually see what is the sum-weight of all the individual actions such a route would entail. Thus, you cannot simply weigh it until you really know the sum-weight of the leaves. | 14:37 |
vi1985 | andrei, right, and all I'm saying is that you never know the "cost" of a particular strategy, until you have the sum-weight of the leaves trickle up to the node. | 14:39 |
vi1985 | andrei, which is brute search. | 14:39 |
andrei | vi1985, No AI does; all AIs use heuristics to decide when to stop | 14:39 |
andrei | vi1985, Including humans :) | 14:39 |
andrei | vi1985, All I want to do is implement the same rules normal AIs do, but in terms of a game tree to represent possible moves, instead of hardcoding the possible moves in the AI. Your criticism applies to every AI (aside from brute force) | 14:40 |
vi1985 | andrei, but such a generic approach, that says "i'll let a module decide if it wants to use other modules", has the danger of going very deep, and very wide into the abstraction-tree | 14:40 |
vi1985 | andrei, no, it applies to the approach that lets a module choose other modules to take some of the action. this is a very slippery slope to brute force! | 14:41 |
vi1985 | andrei, your design would work perfectly for designing a cooperative AI (as you have mentioned!), but when there is no "smart overseer", it can result in terrible recursion... | 14:42 |
*** Epyon has quit IRC | 14:43 | |
vi1985 | andrei, and you cannot hard code a smart overseer, and still maintain the generality of your design (which, i assume, is its main feature) | 14:43 |
andrei | vi1985, You're assuming that it would expand nodes randomly. It would not. The same way you decide that to move a unit you only care about A* + the weight being how likely you'll get attacked and then have a likelihood function, that's the same thing you write here | 14:44 |
andrei | vi1985, Not every module must be entirely ruleset independent. You want as much to be reusable (like A*) as possible, but some things must have game knowledge | 14:44 |
andrei | vi1985, If the AI was ruleset independent entirely that would assume it can find new strategies on its own | 14:45 |
andrei | vi1985, So if you change rules, yeah. You might have to swap some modules in or some modules out, but on the whole most of them can stay because the actual strategies are mostly the same. (and the really bad ones won't be explored. Like if your ruleset doesn't support research, well.. having the research modules in makes no real difference aside from a bit of wasted CPU time) | 14:46 |
vi1985 | andrei, i stated before that i understand that your strategy modules would randomly assign tasks to other modules. And i understand that the weight functions can come from external libraries (as well as some other restrictions). This is not what i'm saying. what i'm saying, is that in order for a strategy to even begin to guess its weight, it needs to know when the recursion would terminate; that is, when no more modules will be called | 14:48 |
andrei | vi1985, No. They would not randomly assign tasks, not at all. | 14:48 |
vi1985 | andrei, this is not what i'm saying :) | 14:49 |
andrei | vi1985, You said "i stated before that i understand that your strategy modules would randomly assign tasks to other modules." | 14:49 |
vi1985 | andrei, sorry, typo | 14:49 |
vi1985 | disregard it :) | 14:49 |
andrei | vi1985, There's no need for it to go all the way down. It can guess (as does every AI, implicitly or not). For example, if you were to write a normal AI for a game, you would implicitly make some guesses. In freeciv for example certain types of threats are not considered when deciding what a safe path for a unit is. | 14:51 |
vi1985 | andrei, so basically, as i see it, your design allows for any module to call any other module for assistance. that means, that "inner nodes" could not even _begin_ to estimate how many modules will be called, and _which_ modules those will be, and decide on an appropriate heuristic. | 14:52 |
vi1985 | andrei, in the case where you can guess which modules will be called for which actions, you will need gigabytes of logs to come up with appropriate heuristics... | 14:53 |
andrei | vi1985, Call implies a model of execution where modules pass the current instruction pointer over from one to another | 14:54 |
*** brennan_ has quit IRC | 14:55 | |
andrei | vi1985, The modules are arranged in a graph. The AI system explores parts of the graph as needed (the graph is represents data dependencies between modules). I see no reason to let modules look at neighbours in that graph. A module can expand part of the tree if it wants. There's no reason why a module shouldn't just guess at a weight of a particular node. | 14:55 |
andrei | erm remove that is :p | 14:56 |
andrei | vi1985, One of the reasons why I put them in a graph is so that you can keep track of depth | 14:56 |
andrei | vi1985, And one of the reasons I said modules use the game tree and that's just one tool not the central focus is because of estimates | 14:57 |
vi1985 | andrei, but those instructions must be abstract, since in the opposite case the module could execute them itself. That brings me back to the previous point that either a) a "smart overseer" is needed, or else b) you need to document which heuristic can be applied to which combination of action-hierarchies. In the opposite case, you have no way of even guessing the actual actions it will require to perform something | 14:58 |
vi1985 | andrei: the graph size will then be O(n^(n-1)) at worst case, to accommodate for the combinations. | 14:59 |
andrei | vi1985, No, not at all. Every module can be as abstract or not as the writer wants. | 15:00 |
vi1985 | andrei, right, i said before that it can make several concrete actions, and pass some abstract instructions to others. | 15:00 |
andrei | vi1985, Not at all, the graph will be mostly linear. You're assuming again that modules can call any other modules. Ok. Maybe this will help explain. You write a function that moves a unit for a game. You don't put into that function information about research or the economy. You just want to move a unit. | 15:01 |
vi1985 | andrei, i'm not even looking at worst-case scenario... what i'm saying that in principle it will be very hard for modules to evaluate their actions, not knowing how many other modules it will need to call, and which ones will there be. | 15:02 |
andrei | vi1985, If you can write an AI for a game like civilization and have it evaluate how good an action is, you can do the same here | 15:02 |
andrei | vi1985, And write it as rule independent as you want :) | 15:03 |
vi1985 | andrei, but there it uses simple heuristic micromanagement, which is not what you're proposing. You're claiming that modules can delegate abstract orders to other modules. The criticism is that in a complex scenario (say, "take over sector C"), most inner modules will have no knowledge of the actual course of action, since they will be higher up in abstraction, and will thusly have no knowledge which modules the next o | 15:08 |
Ohm | the less rule dependent the better, as long as it gets finished, right? | 15:08 |
andrei | Ohm, Yeah :) exactly. And because it's in a graph you can do neat things, like decide really easily in a ruleset that unit moving shouldn't take some factor into account. | 15:10 |
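A minimal sketch of the design andrei describes above, assuming hypothetical names (Module, evaluate, terrain, threats, move_unit) that are not taken from the actual proposal or any TP code: modules sit in a dependency graph, the AI expands only the parts it needs while tracking depth, any module may fall back to a cheap guess instead of expanding further, and a ruleset can switch off a whole factor.

# Hypothetical illustration only; not the proposal's real module API.
class Module:
    def __init__(self, name, dependencies=(), guess=0.0):
        self.name = name
        self.dependencies = list(dependencies)  # data dependencies, not calls
        self.guess = guess                      # cheap estimate if we stop here

    def evaluate(self, depth, max_depth, disabled=frozenset()):
        # Past the depth budget (or with nothing to expand), fall back to
        # this module's own cheap guess.
        if depth >= max_depth or not self.dependencies:
            return self.guess
        # A ruleset can switch off whole factors by naming them in `disabled`.
        active = [d for d in self.dependencies if d.name not in disabled]
        if not active:
            return self.guess
        # Expand only the relevant part of the graph and combine the
        # children's estimates (a plain average keeps the sketch simple).
        scores = [d.evaluate(depth + 1, max_depth, disabled) for d in active]
        return sum(scores) / len(scores)

# Example: a "move unit" module depends on terrain and threat estimates,
# but knows nothing about research or the economy.
terrain = Module("terrain", guess=0.6)
threats = Module("threats", guess=0.3)
move_unit = Module("move_unit", dependencies=[terrain, threats])

# A ruleset can easily say "don't take threats into account":
print(move_unit.evaluate(depth=0, max_depth=3, disabled={"threats"}))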
vi1985 | Ohm, working designs are top priority as far as I know. Rule independence, or the infrastructure for it, is nice but secondary. | 15:10 |
vi1985 | andrei, well, we agree to disagree then. let's keep it at that :) | 15:11 |
vi1985 | i need to get to the lab and do some work before wall climbing this evening... | 15:11 |
Ohm | k | 15:11 |
* vi1985 does not like dueling. | 15:12 | |
andrei | vi1985, Heh, very well | 15:12 |
Ohm | i am healing up a muscle inflammation | 15:12 |
andrei | Ohm, Ouch, I hate those | 15:12 |
Ohm | Really want to get back to doing real exercise | 15:12 |
vi1985 | andrei, take care. No hard feelings on this side of the channel :) | 15:12 |
Ohm | right now there's very little I can do for heavy cardio, unless I spend the huge amount of money to get to a ymca | 15:13 |
andrei | vi1985, Heh, no problemo :) I'll gladly answer more some other day | 15:13 |
*** vi1985 has quit IRC | 15:13 | |
Ohm | and according to the medical person I spoke to, I shouldn't be lifting weights either | 15:13 |
andrei | Ohm, Ouch, which muscle? | 15:13 |
Ohm | connection between torso and right leg | 15:16 |
Ohm | inner | 15:16 |
Ohm | I don't remember its name | 15:16 |
andrei | Ah, yeah.. those aren't any fun. | 15:20 |
*** Iwanowitch has joined #tp | 15:21 | |
Ohm | very depressing | 15:27 |
SmokingRope | is that something they can treat? | 15:30 |
SmokingRope | i've bruised my ribs a couple times and they just told me go easy on them for a couple weeks | 15:31 |
Ohm | anti-inflammatory pills and rest is pretty much it | 15:38 |
*** andrei has quit IRC | 15:53 | |
*** vi1985 has joined #tp | 16:27 | |
vi1985 | JLP, i really didn't enjoy the last IRC session... I don't think this kind of thing should be encouraged. | 16:37 |
JLP | vi1985: well we don't encourage it but discussions like this are nothing unusual in open source | 16:38 |
Iwanowitch | Wait, what happened? If I can know? | 16:39 |
vi1985 | JLP, i understand the value of it from your perspective. | 16:39 |
JLP | vi1985: so as long as it is only talk about the technology, and civilized as this one was, there should be nothing wrong | 16:39 |
JLP | vi1985: but it is always up to each person if he wants to take part or not | 16:40 |
vi1985 | Iwanowitch, basically someone challenged me to a "poke holes in each other's design" session. | 16:40 |
Iwanowitch | JLP: I updated my proposal on the wiki, btw | 16:40 |
JLP | Iwanowitch: will take a look a bit later | 16:42 |
vi1985 | JLP, i didn't have a problem with his questions... I just thought that the design proposal should speak for itself. | 16:42 |
Iwanowitch | vi1985: Ah, I see. Well, yeah, it depends on how and why it is done, I suppose... Constructive criticism is good, nasty things should be avoided, makes sense. | 16:42 |
Iwanowitch | JLP: thanks | 16:42 |
vi1985 | ok, I just didn't know it was common practice in open source :) | 16:43 |
JLP | vi1985: it's one of the experiences of open source development that programs like gsoc try to show new people | 16:48 |
JLP | vi1985: i know it does feel kinda hard and unpleasant at first, but you get used to it | 16:48 |
JLP | vi1985: i remember when i experienced my first session like this with KDE developers | 16:49 |
*** zzorn has joined #tp | 16:50 | |
vi1985 | JLP, i understand... especially when there is competition over a position. Me and my friends do it all the time, but of course it would be different here. | 16:52 |
Iwanowitch | vi1985: Wait until you see code reviews from complete strangers... That feels weird. | 16:53 |
Iwanowitch | I'm not sure TP has much of a code review policy though? | 16:54 |
vi1985 | Iwanowitch, :) I see you've been around the block? | 16:54 |
Iwanowitch | I did soc last year, been meddling a bit since then. | 16:54 |
Iwanowitch | I don't call myself experienced though. | 16:55 |
karol | vi1985: imho it was one of the most civilised discussions i have witnessed | 16:55 |
vi1985 | karol, lol :) i'll take your word for it ;) | 16:56 |
*** DTRemenak has quit IRC | 16:56 | |
JLP | Iwanowitch: yeah we don't have a policy, but there should definitely be more code and peer review | 16:57 |
JLP | for the better of the code itself :) | 16:57 |
vi1985 | i guess here you also need to put your money where your mouth is! ;) | 16:57 |
Iwanowitch | JLP: Of course, you need people and time for it. | 16:57 |
karol | although, from mathematics' point of view I have to agree with andrei :-/ | 16:57 |
vi1985 | karol, what exactly are you referring to? | 16:58 |
karol | category modeling is kind of difficult | 16:58 |
karol | vi1985: the GA | 16:58 |
* JLP can't agree with anyone since he knows nothing about AI, but it was interesting reading it | 16:59 |
*** jphr has quit IRC | 17:00 | |
vi1985 | karol, I don't think that andrei got to the bottom of it. He was interested to see how an algorithm can compare between individual elements of "hypotheses", whereas that is not what it does at all. | 17:00 |
Iwanowitch | Mmm. With proper representation, genetic algorithms can be very powerful for most things. But it's often hard to find a good representation and associated mutation/recombination operators. | 17:01 |
vi1985 | Yes, that | 17:01 |
vi1985 | *sorry, will continue | 17:01 |
*** zzorn is now known as zzorn_food | 17:02 | |
* JLP would just take in both AI applications if there were enough slots for GSoC and then we could see the battle of both AIs :) | 17:03 | |
vi1985 | Yes, that is true. The thing is that representation and decoding are actually not a big issue here; what I was surprised he didn't mention is the difficulty in simulating it (which will be the hardest part of coding it) | 17:04 |
karol | vi1985: I am looking at it as a mathematical model; I learned that categories are better expressed by another structure, or better, that data compression algorithms should be used, or that you should look for correlation ... -> it's rather from my experience with logistic regression and neural networks ... | 17:04 |
JLP | damn i can already see it will be even harder this year to choose only a few applications because of limited number of slots | 17:04 |
karol | JLP: bye bye persistence module :((( | 17:05 |
vi1985 | karol, it all really depends, as Iwanowitch said, on the selection mechanism for passing to the next generation. He (andrei) maintained that I need to compare segments of "hypotheses", while I don't think it's necessary at all. But I don't want to discuss it for fear of putting words in his mouth! | 17:06 |
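To make the genetic-algorithm vocabulary in the exchange above concrete, here is a bare-bones sketch with a bit-string representation, mutation and recombination operators, and tournament selection deciding what passes to the next generation. The fitness function is only a stub standing in for the expensive game simulation vi1985 mentions; none of the names come from either proposal.

# Illustrative GA skeleton only; not code from either AI proposal.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 20, 50

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "simulate the hypothesis in the game": here, count ones.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point recombination of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def select(population, k=3):
    # Tournament selection: whole genomes compete on fitness; there is no
    # need to compare individual segments of a hypothesis against each other.
    return max(random.sample(population, k), key=fitness)

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
print(max(map(fitness, population)))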
JLP | karol: i wouldn't say that just yet, the server could use that a lot, since it is still not very stable | 17:06 |
* karol shouldn't type while eating | 17:07 |
* vi1985 should start concentrating on getting his AVL trees balanced... or else he will get a C | 17:08 | |
karol | JLP: btw. when does llnz usually arrive here? :) | 17:09 |
JLP | karol: uf hard to say, it's 10:13 at his place now | 17:09 |
karol | JLP: pm? | 17:10 |
JLP | karol: am | 17:11 |
karol | hmm, 12hours before me :) | 17:12 |
JLP | karol: yeah same here, he's in new zealand, mithro has 7:47 on his clock and is somewhere in australia | 17:13 |
karol | JLP: you're from slovenia, am I right? | 17:14 |
* danderson wonders how many clocks are needed to keep track of all TP developers :) | 17:14 | |
JLP | i guess they should be here any hour now, if there's no sun outside or if they don't sleep too long :) | 17:14 |
JLP | karol: yup, slovenia | 17:14 |
JLP | danderson: one is enough, you just need to know all the offsets :) | 17:16 |
danderson | heh | 17:16 |
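JLP's point amounts to keeping one UTC clock plus a table of offsets. A tiny illustration follows; the offsets are only approximate and ignore daylight saving, and the labels are just for the joke.

# Illustrative only: one UTC clock, many local times via offsets.
from datetime import datetime, timedelta, timezone

offsets = {"llnz (NZ)": 12, "mithro (AU)": 9.5, "JLP (SI)": 2, "karol (CZ)": 2}
now = datetime.now(timezone.utc)
for who, hours in offsets.items():
    print(who, (now + timedelta(hours=hours)).strftime("%H:%M"))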
karol | JLP: I have never heard your language, but I would love to compare it to slovak, my mother tongue | 17:16 |
karol | live of course :) | 17:17 |
JLP | karol: hehe, well it looks like if we talk to czechs we understand each other almost fine, if it is any similar | 17:17 |
JLP | karol: sometimes it is easier than using english :) | 17:18 |
karol | JLP: czech is quite similar (I study in Prague), polish is a bit harder to understand (my home is near the border) | 17:19 |
karol | JLP: it would be quite a talk to summon people from 4 countries :) | 17:20 |
JLP | karol: yeah, it should be fun | 17:20 |
JLP | karol: i only know that while otrok in our language means baby/kid it means slave in czech | 17:21 |
JLP | karol: so it is quite funny to see cars on the road that say slave on board :) | 17:21 |
karol | karol: in slovak too, our two languages are very similar | 17:22 |
karol | lol | 17:22 |
karol | I write to myself ;) | 17:22 |
* JLP is too tired, goes to sleep | 17:51 |
JLP | good night all | 17:51 |
vi1985 | take care | 17:51 |
Iwanowitch | Night. | 17:51 |
karol | good night all | 18:06 |
*** karol has quit IRC | 18:07 | |
*** andrei has joined #tp | 18:51 | |
*** vi1985 has quit IRC | 18:52 | |
*** greywhind has joined #tp | 19:00 | |
*** zzorn_food is now known as zzorn | 19:18 | |
*** greywhind_ has joined #tp | 19:20 | |
*** greywhind has quit IRC | 19:20 | |
*** ezod has quit IRC | 19:56 | |
*** xdotx has joined #tp | 20:01 | |
*** bddebian has joined #tp | 20:24 | |
bddebian | Howdy | 20:26 |
*** zzorn is now known as zzorn_zzz | 20:46 | |
SmokingRope | hi bddebian | 20:47 |
bddebian | Hello SmokingRope | 20:52 |
*** Iwanowitch has quit IRC | 21:17 | |
*** greywhind has joined #tp | 21:20 | |
*** greywhind_ has quit IRC | 21:20 | |
*** Lukstr has joined #tp | 21:22 | |
*** mithro has joined #tp | 21:48 | |
mithro | morning people | 22:06 |
mithro | xdotx: ping? | 22:06 |
* xdotx hides | 22:06 | |
xdotx | mithro: hate to say it but i don't think i'm going to be able to do it today. | 22:07 |
mithro | :( | 22:07 |
mithro | any particular reason? | 22:07 |
xdotx | was up late last night working on my graphics project and i'm still not done - it's due tomorrow | 22:08 |
xdotx | also i'm pretty sure we won't get very far testing because of some known issues with the new object view system | 22:08 |
xdotx | i think llnz knows more about what's probably going wrong than me. i've got some clues, but i've concluded the solution is non-trivial.. for me at least | 22:09 |
xdotx | i'm thinking it's got something to do with object views not being created for objects you don't own, and thus those objects appear not to be updated | 22:10 |
xdotx | er, to clarify it's with how RFTS is using the object view system, not the system itself | 22:10 |
xdotx | i think that one bug is producing a number of different strange problems though | 22:12 |
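A hypothetical illustration of the failure mode xdotx describes, sketched in Python rather than the actual tpserver-cpp or RFTS code (ObjectView, Player and ensure_view are made-up names, not the real API): if object views are created only for objects a player owns, changes to unowned objects never reach that player's client, so those objects appear never to update.

# Hypothetical sketch of the suspected bug pattern; not real TP server code.
class ObjectView:
    def __init__(self, obj):
        self.obj = obj
        self.last_seen = dict(obj)   # snapshot that gets sent to the client

    def refresh(self):
        self.last_seen = dict(self.obj)

class Player:
    def __init__(self, name):
        self.name = name
        self.views = {}              # object id -> ObjectView

    def ensure_view(self, oid, obj, owner):
        # Buggy behaviour being illustrated: only owned objects get a view.
        if owner == self.name:
            self.views[oid] = ObjectView(obj)

    def visible_state(self, oid):
        view = self.views.get(oid)
        return view.last_seen if view else None   # unowned: nothing to update

universe = {1: {"owner": "alice", "pos": 0}, 2: {"owner": "bob", "pos": 0}}
alice = Player("alice")
for oid, obj in universe.items():
    alice.ensure_view(oid, obj, obj["owner"])

universe[2]["pos"] = 42              # bob's object moves...
for view in alice.views.values():
    view.refresh()                   # ...owned objects are refreshed as usual...
print(alice.visible_state(2))        # ...but alice still sees None for object 2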
SmokingRope | is there a known issue with colonization? | 22:16 |
SmokingRope | it seems like every time i try to colonize the server crashes | 22:16 |
xdotx | x.x | 22:16 |
SmokingRope | i've reproduced a number of times now | 22:16 |
SmokingRope | s/d a/d it a/ | 22:17 |
mithro | SmokingRope: under RFTS? | 22:17 |
mithro | or Minisec? | 22:17 |
SmokingRope | on demo1 and linz's server most recently | 22:17 |
SmokingRope | earlier today | 22:17 |
mithro | hrm... you'll have to poke llnz | 22:17 |
SmokingRope | llnz: ping? | 22:17 |
mithro | he isn't on at the moment :) | 22:18 |
SmokingRope | lol | 22:18 |
SmokingRope | i guess it's not a server crash, but the client locks up on the 'downloading orders' step of a universe update | 22:25 |
mithro | SmokingRope: oh | 22:39 |
mithro | are you running from git? | 22:39 |
*** Lukstr has quit IRC | 22:42 | |
*** Lukstr has joined #tp | 22:48 | |
SmokingRope | mithro: nope, just the source forge version | 22:52 |
mithro | SmokingRope: ahh, known problem | 22:53 |
mithro | I'm pondering releasing a 0.3.1.1 | 22:53 |
mithro | which fixes that bug | 22:53 |
SmokingRope | i got it all working from git, the installer is nice though | 23:39 |
*** DTRemenak has joined #tp | 23:45 | |
*** llnz has joined #tp | 23:46 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!