[02:58] Join: asw joined #corewars
[05:10] Join: fiveop joined #corewars
[06:06] MSG: Read error: Connection reset by peer
[06:51] Join: CL joined #corewars
[07:19] MSG: Remote host closed the connection
[07:58] Join: CL joined #corewars
[09:43] Part: CL left #corewars
[12:45] Join: Core29 joined #corewars
[13:30] MSG: Quit: Trillian (http://www.ceruleanstudios.com)
[14:58] Join: Fluffy joined #corewars
[14:58] :)
[15:26] I've got a little dilemma :-(
[15:27] Should I make PyCorewar behave exactly like the '94 standard specifies, or like pmars behaves?
[15:27] Options to do both?
[15:27] hmm
[15:27] How about updating pMARS?
[15:27] It already has several bugs
[15:28] It is all about the evaluation of the addressing modes.
[15:29] say ... add.f { x, # y
[15:29] pmars stores the values of x and y in its internal registers
[15:30] hmm ... bad example
[15:30] let's change to "add.f < x, # y"
[15:31] when evaluating "< x", "y" might be decremented
[15:31] but since "y" is already stored in the internal registers
[15:31] the original "y" is used for the addition, not "y-1" as the standard specifies
[15:33] Any comments on which is easier to understand?
[15:43] Could you write all this down in an e-mail for me, along with some examples? I _may_ try and fix up pMARS at some point (although I make no promises...)
[15:44] no problem
[15:44] what's your mail address?
[15:47] 
[15:48] I'm already at the point of writing a really extensive testsuite for the '94 standard
[15:48] one which checks *every* possible instruction
[15:51] Unfortunately there are so many possible instructions that the tests have to be created by some kind of script
[15:51] which still makes it difficult to verify that the tests themselves are correct
[16:02] True. But at least that way you can find regressions, and check different implementations against each other. Differences may well indicate bugs.
[16:04] I think I will try to write such a testsuite.
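The discrepancy described around 15:29–15:31 can be made concrete with a toy model. This is a minimal sketch, not PyCorewar's or pMARS's actual code: core cells are `[a_field, b_field]` pairs, and an `ADD.F < x, # y` at address 0 is executed under both evaluation orders. With `x = 0` the predecrement hits the instruction's own B-field, so the draft order (read the immediate after the A-operand's side effect) and the pMARS order (snapshot the instruction first) disagree by exactly the `y` vs. `y-1` the chat mentions.

```python
# Toy model (not real PyCorewar/pMARS code) contrasting the two
# evaluation orders for "ADD.F < x, # y" stored at address 0.
CORESIZE = 8

def run_add_f_predec(core, strategy):
    """Execute "ADD.F <x, #y" at address 0 of a tiny core.

    strategy "draft": evaluate the A-operand (with its decrement side
    effect) first, then read the immediate B-value from core.
    strategy "pmars": cache the current instruction at fetch, so the
    immediate B-value is read before the decrement lands.
    """
    a_field, b_field = core[0]          # x = a_field, y = b_field
    if strategy == "pmars":
        cached_y = core[0][1]           # immediate operand cached at fetch
    # A-operand "<x": decrement the B-field of the cell x points to,
    # then use the decremented field as an additional offset.
    target = a_field % CORESIZE
    core[target][1] = (core[target][1] - 1) % CORESIZE
    a_ptr = (target + core[target][1]) % CORESIZE
    a_ins = list(core[a_ptr])           # the A-instruction's fields
    if strategy == "draft":
        cached_y = core[0][1]           # read y *after* the decrement
    # B-operand "#y": pointer is 0; ADD.F adds A-fields to B-fields.
    core[0][0] = (core[0][0] + a_ins[0]) % CORESIZE
    core[0][1] = (cached_y + a_ins[1]) % CORESIZE
    return core

# x = 0, y = 4: the predecrement modifies the instruction itself.
initial = lambda: [[0, 4], [1, 1], [2, 2], [3, 3],
                   [0, 0], [0, 0], [0, 0], [0, 0]]
draft = run_add_f_predec(initial(), "draft")
pmars = run_add_f_predec(initial(), "pmars")
print(draft[0], pmars[0])   # B-fields differ by one
```

Under these assumptions the resulting instruction is `[3, 6]` per the draft order and `[3, 7]` per the pMARS order: the one-off difference is the cached `y` versus the decremented `y-1`.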
[16:05] As soon as I've finished writing your mail, I'll post about it at r.g.c.
[16:05] Sounds good to me. I know I wrote one for my Z80 emulator and found it useful, even though the results were generated by the emulator itself.
[16:07] At the moment I'm thinking about lots of really short programs, which tie if they execute correctly and die if not
[16:07] That way you can see from the results that it has worked (or not)
[16:07] just like the validate.red for '88
[16:08] If I restrict the programs to (say) tiny settings with max. 200 cycles, it shouldn't take much time to run all the tests
[16:08] Join: John_ joined #corewars
[16:09] Nick Change: John_ changed nick to John
[16:09] Hi John :)
[16:09] Oh
[16:09] How did I get here?
[16:09] Hi Fluffy :-)
[16:10] John: Do I really need to tell you about the human reproduction cycle? ;-)
[16:10] Hmmm... not quite what I meant
[16:14] John: Where did you find the 3.3 version of the draft?
[16:14] It's an orphan file on Planar's site
[16:15] Time to write a 3.4 version :)
[16:15] :-)
[16:21] ok, I've sent the email and posted to r.g.c. about the testsuite
[16:23] ?
[16:25] If I'm correct, pMARS behaves differently from the '94 draft
[16:25] I intend to write a little testsuite which checks the behaviour of every instruction
[16:26] that way it should be easier to verify MARS implementations
[16:37] 19 * 7 * 8 * 8 = 8512 tests + a couple of tests that division by zero kills the current process + correct process management
[16:39] testing that division by zero kills the current process needs 2 * 7 * 8 * 8 = 896 tests
[16:39] hmm ... that should make about 10000 different programs
[16:39] quite a task
[16:40] anybody want to help check the programs? ;-)
[16:41] Can't you fit more than one test in each program?
[16:41] I could (like validate.red), but that would make it harder to check the tests themselves
[16:42] and lots of tests would be the same
[16:42] for instance spl.? behaves the same for every "?"
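The counting at 16:37–16:39 can be checked mechanically. The factors follow the chat: 19 opcodes, 7 modifiers (`.a .b .ab .ba .f .x .i`), and 8 addressing modes for each of the two operands; the factor of 2 in the division-by-zero count presumably covers DIV and MOD.

```python
# Verifying the test-count arithmetic from the discussion above.
opcodes = 19      # opcode count as given in the chat
modifiers = 7     # .a .b .ab .ba .f .x .i
modes = 8         # addressing modes per operand

per_instruction_tests = opcodes * modifiers * modes * modes
div_by_zero_tests = 2 * modifiers * modes * modes   # DIV and MOD

total = per_instruction_tests + div_by_zero_tests
print(per_instruction_tests, div_by_zero_tests, total)
```

This gives 8512 and 896 respectively, for a total of 9408 programs, matching the chat's "about 10000".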
[16:42] or dat.?, nop.?
[16:43] spl should behave the same for every modifier.
[16:43] But you have to test each one, that's the idea of testing ;-)
[16:44] Unfortunately we cannot test the evaluation of the addressing modes separately, because it can be evaluated differently for every instruction (look at the way fmars does it)
[16:45] but adding an additional 8 * 8 shouldn't hurt ;-)
[16:45] and that could be a start
[16:46] Isn't it easier to validate it by checking through your code with a fine toothcomb? :-P
[16:47] John: And how would you check that fmars behaves correctly?
[16:47] and PyCorewar already uses lots of fmars' features
[16:47] and will use even more
[16:48] Gotta go, someone's waiting!
[16:48] I'll be lurking!
[16:48] * Fluffy waves
[16:52] Join: Core29 joined #corewars
[16:53] Hi Core29
[17:07] hmm ... while I'm working on the testsuite I can continue to work on the Tiny Dodo
[17:10] Join: Fluffy_ joined #corewars
[17:10] MSG: Quit: dat.f < 1, # 0
[17:13] Nick Change: Fluffy_ changed nick to Fluffy
[17:19] MSG: Quit: Trillian (http://www.ceruleanstudios.com)
[18:21] Join: sascha joined #corewars
[18:22] Hi sascha
[18:22] Hi Jens
[18:31] MSG:
[18:32] MSG: Ping timeout: 252 seconds
[18:34] Join: John_ joined #corewars
[18:34] Nick Change: John_ changed nick to John
[18:34] :)
[18:43] John: what's your opinion? should we adhere to the standard or to the way pmars does it?
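The script-generated suite discussed above (one tiny tie-or-die program per opcode/modifier/mode combination) could be sketched like this. The opcode and mode lists, and the test-body layout, are illustrative assumptions, not PyCorewar's actual generator; a real suite would fill in the expected core state for each combination.

```python
# Sketch of a generator for the per-instruction testsuite described in
# the chat: enumerate every opcode/modifier/A-mode/B-mode combination
# and emit one skeleton Redcode test warrior for each.
from itertools import product

# Illustrative instruction set (19 opcodes, as in the chat's count).
OPCODES = ["DAT", "MOV", "ADD", "SUB", "MUL", "DIV", "MOD", "JMP",
           "JMZ", "JMN", "DJN", "SEQ", "SNE", "CMP", "SLT", "SPL",
           "NOP", "LDP", "STP"]
MODIFIERS = ["A", "B", "AB", "BA", "F", "X", "I"]
MODES = ["#", "$", "@", "<", ">", "*", "{", "}"]

def make_test(op, mod, amode, bmode):
    """Return one tiny warrior exercising a single instruction.

    Following the chat's design: the warrior ties (survives to the
    cycle limit) if the instruction behaved as expected, and dies
    otherwise. The checking section is only a placeholder here.
    """
    return "\n".join([
        f";name test {op}.{mod} {amode} {bmode}",
        f"        {op}.{mod} {amode}1, {bmode}2",
        "        ; ... compare core state here, jump to a DAT on mismatch",
        "loop    JMP.B   $0, $0      ; survive to the cycle limit -> tie",
    ])

tests = [make_test(*combo)
         for combo in product(OPCODES, MODIFIERS, MODES, MODES)]
print(len(tests))   # 19 * 7 * 8 * 8 combinations
```

Generating the programs this way keeps each one trivially small, which is what makes the "tie if correct, die if not" convention checkable at a glance, at the cost the chat notes: the generator itself still has to be verified by hand.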
[19:00] * Fluffy waves
[19:00] MSG: Quit: dat.f < 1, # 1
[19:02] MSG: Ping timeout: 252 seconds
[19:05] Join: John_ joined #corewars
[19:05] Nick Change: John_ changed nick to John
[19:14] MSG: Ping timeout: 252 seconds
[19:15] Join: John_ joined #corewars
[19:15] Nick Change: John_ changed nick to John
[19:25] * sascha waves
[19:25] Part: sascha left #corewars
[19:46] MSG: Ping timeout: 252 seconds
[19:51] Join: John_ joined #corewars
[19:51] Nick Change: John_ changed nick to John
[20:01] Join: John_ joined #corewars
[20:02] MSG: Ping timeout: 252 seconds
[20:02] Nick Change: John_ changed nick to John
[20:07] MSG: Ping timeout: 252 seconds
[21:06] Join: CL joined #corewars
[21:19] Join: Roy joined #corewars
[21:23] MSG: Client Quit
[22:55] MSG: Quit: humhum