Wednesday, 2019-01-09

00:00 *** tpb has joined #symbiflow
01:25 <mithro> digshadow: So my database build failed on imuxlout
01:51 *** nonlinear has joined #symbiflow
02:21 *** _whitelogger has quit IRC
02:24 *** _whitelogger has joined #symbiflow
04:05 *** citypw has joined #symbiflow
09:01 *** bitmorse_ has joined #symbiflow
09:33 *** karol2 has joined #symbiflow
09:33 *** karol2 is now known as kgugala
09:54 <nats`> okay mithro, got it :)
10:24 *** citypw has quit IRC
10:32 *** bitmorse_ has quit IRC
10:34 *** bitmorse_ has joined #symbiflow
11:23 *** bitmorse_ has quit IRC
11:27 *** bitmorse_ has joined #symbiflow
11:34 *** bitmorse_ has quit IRC
11:34 *** bitmorse_ has joined #symbiflow
12:19 *** bitmorse_ has quit IRC
12:19 *** bitmorse__ has joined #symbiflow
12:48 *** bitmorse__ has quit IRC
13:06 *** bitmorse__ has joined #symbiflow
13:18 *** bitmorse__ has quit IRC
13:32 *** citypw has joined #symbiflow
13:36 *** bitmorse__ has joined #symbiflow
13:48 *** bitmorse__ has quit IRC
14:05 *** bitmorse__ has joined #symbiflow
14:19 *** bitmorse__ has quit IRC
17:41 <mithro> digshadow: I'm blocked on building a new database because of the imuxlout issue
17:44 <digshadow> mithro: it's not a new issue, but someone is actively looking at it right now
17:45 <mithro> Do I just keep rerunning the fuzzer?
17:45 <kgugala> I'm looking at that
17:46 <kgugala> I think I'm getting closer, but there are still some issues that need to be resolved
22:07 <nats`> https://pastebin.com/hSM0bSkd
22:07 <tpb> Title: [TCL] 072 Memory Leak Split job attempt - Pastebin.com (at pastebin.com)
22:07 <nats`> I'm trying something like that for the 072 fuzzer problem
22:08 <nats`> from what I saw, the problem comes from the get_nodes function
22:08 <nats`> my hope is that the temporary interpreter will end up cleaned along with everything inside it
22:08 <nats`> if not, we may need a sort of bash script on top that calls the tcl with the right index values
22:09 <nats`> and sorry for the crappy tcl, but it's been a long time since I wrote a line of that... language
22:12 <nats`> we'll have an answer soon, but it's possible the main vivado interpreter doesn't clean everything up like it should
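
The split-job approach described above amounts to running each block of queries inside a throwaway slave interpreter and deleting it afterwards. A minimal sketch of the idea (not the actual pastebin code; the block size, the apply wrapper, and the output handling are illustrative):

    # Sketch of the per-block slave-interpreter idea; block size and the
    # exact queries are placeholders, not the real 072 fuzzer code.
    set block_size 100000
    set num_pips [llength [get_pips]]

    for {set start 0} {$start < $num_pips} {incr start $block_size} {
        set stop [expr {$start + $block_size - 1}]

        # Fresh slave interpreter per block. Vivado's commands live in
        # the master interpreter, so alias the ones the block needs.
        set slave [interp create]
        interp alias $slave get_pips  {} get_pips
        interp alias $slave get_nodes {} get_nodes
        $slave eval [list apply {{start stop} {
            foreach pip [lrange [get_pips] $start $stop] {
                get_nodes -uphill -of_object $pip
                # ... dump this block's results here ...
            }
        }} $start $stop]

        # Deleting the slave should reclaim everything created in it.
        interp delete $slave
    }

One Tcl caveat worth noting: an aliased command actually executes in the master interpreter, so any objects Vivado allocates on its behalf never belong to the slave's context in the first place, which would be consistent with the results reported below.
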
22:22 <litghost> nats: Does that lower the memory usage as expected?
22:27 <nats`> litghost, uhhmm still have to wait
22:27 <nats`> it doesn't fall after each interpreter delete, but apparently the GC could take care of that later, when memory is needed
22:27 <nats`> but I don't think it will
22:27 <nats`> let's see
22:28 <nats`> in the worst case I'll split it at the shell script level
22:28 <litghost> How are you measuring peak memory usage? I've been using "/usr/bin/time -v <program>"
22:28 <nats`> I'm only looking at top
22:28 <nats`> is that enough?
22:28 <nats`> by the way, tcl's memory command is not available in the vivado shell
22:28 <nats`> I guess they didn't compile with the right flag
22:28 <litghost> :(
22:29 <nats`> do you think my tcl code is clean enough?
22:29 <litghost> using /usr/bin/time provides a nice summary of memory usage and CPU time
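
For reference, GNU time's verbose mode prints a "Maximum resident set size" line (peak RSS) when the process exits, e.g.:

    # The full path bypasses the shell's builtin "time", which has no -v.
    /usr/bin/time -v vivado -mode batch -source split_job.tcl
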
22:29 <litghost> the tcl looks fine
22:30 <litghost> We should write a comment about why the extra complexity exists, but hold off until it's proven to work
22:30 <nats`> sure
22:30 <nats`> I ran a lot of tests
22:30 <nats`> and it appears that calling get_nodes, no matter how you do it, eats memory until you close the vivado tcl shell
22:31 <litghost> Likely vivado is opening a bunch of internal data structures and doesn't close them
22:31 <nats`> https://pastebin.com/0seQdiUJ
22:31 <tpb> Title: [TCL] get_nodes leak ? - Pastebin.com (at pastebin.com)
22:31 <nats`> certainly
22:31 <litghost> For systems with plenty of memory, that is likely a good choice
22:31 <nats`> in that simple loop it still eats all the RAM, even with an explicit unset
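
The loop in that second pastebin is presumably something along these lines (reconstructed from the description, not from the pastebin itself):

    # Reconstructed repro: RSS keeps growing even though the Tcl-level
    # reference is dropped on every iteration.
    foreach pip [get_pips] {
        set nodes [get_nodes -uphill -of_object $pip]
        unset nodes   ;# frees the Tcl variable, but Vivado's RSS still grows
    }
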
22:31 <nats`> what is a lot? :D
22:31 <litghost> I have 128 GB, runs fine on a 50k part
22:32 <litghost> However there are 100k and 200k and higher parts
22:32 <litghost> At some point even my system will fall over
22:32 <nats`> sure
22:32 <nats`> what worries me is that if it doesn't work with a slave interpreter, there is a huge problem in their tcl interpreter
22:32 <nats`> because a slave interpreter is deleted along with all of its context
22:33 <nats`> at least it should be
22:33 <litghost> get_nodes is their interface; there could actually be a non-tcl leak present in that interface
22:33 <nats`> but if I'm not wrong, tcl's GC implementation is left up to the vendor
22:33 <nats`> uhhmmm good point!
22:33 <litghost> We are also using a fairly old Vivado version, so it's possible this bug was already fixed in a newer version
22:33 <nats`> something common with C wrappers
22:34 <nats`> I also found something interesting about old vivado
22:34 <nats`> https://forums.xilinx.com/t5/Vivado-TCL-Community/Memory-Leak-in-Vivado-TCL/td-p/525475
22:34 <tpb> Title: Memory Leak in Vivado TCL - Community Forums (at forums.xilinx.com)
22:34 <nats`> they added a parameter to not pipe all puts through the vivado core
22:38 <nats`> should explode soon
22:38 <nats`> ..
22:38 <nats`> bang
22:38 <nats`> it was auto-killed by linux :D
22:39 <nats`> Block: 17
22:39 <nats`> StartI: 5844379 - StopI: 6188166
22:39 <nats`> inside interpreter 5844379 6188166
22:39 <nats`> WARNING: [Vivado 12-2548] 'get_pips' without an -of_object switch is potentially runtime- and memory-intensive. Please consider supplying an -of_object switch. You can also press CTRL-C from the command prompt or click cancel in the Vivado IDE at any point to interrupt the command.
22:39 <nats`> get_pips: Time (s): cpu = 00:00:09 ; elapsed = 00:00:09 . Memory (MB): peak = 16285.488 ; gain = 510.996 ; free physical = 307 ; free virtual = 459
22:39 <nats`>  /home/nats/Xilinx/Vivado/2017.2/bin/loader: line 179: 10787 Killed                  "$RDI_PROG" "$@"
22:40 <nats`> so I guess a good mitigation would be an overlay, either another tcl script or a bash script, that starts each block's processing with the right index
22:40 <nats`> a bash script seems consistent with the build process we use in the fuzzers
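
Such a wrapper might look like the sketch below; the block bookkeeping and script name are assumptions, with the Tcl side reading its bounds from Vivado's -tclargs:

    #!/bin/bash
    # Sketch: restart Vivado for each block so the OS reclaims the leaked
    # memory at process exit. NUM_BLOCKS would come from a counting pass.
    set -e

    BLOCK_SIZE=100000
    NUM_BLOCKS=20

    for ((i = 0; i < NUM_BLOCKS; i++)); do
        start=$((i * BLOCK_SIZE))
        stop=$((start + BLOCK_SIZE - 1))
        # split_job.tcl would read $argv for its block bounds.
        vivado -mode batch -source split_job.tcl -tclargs $start $stop
    done
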
22:41 <nats`> uhhmmm I can test with 2017.4, I have it installed!
22:43 <nats`> nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/201
22:43 <nats`> 2016.4/ 2017.2/ 2017.4/
22:43 <nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/2017.4/settings64.sh
22:43 <nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ vivado -mode batch -source split_job.tcl
22:43 <nats`> let's try
22:43 <nats`> I may have a 2018 install too
22:43 <nats`> I should send xilinx the bill for the occupied hard drive space :)
22:45 <litghost> Anyways, I'm okay with a bash based approach
22:45 <litghost> You might want a two phase approach, where you identify the number of pips and then delegate to each vivado instance
22:45 <litghost> Much like your interpreter approach
22:45 <nats`> yep :)
22:46 <nats`> and I was thinking of writing each block to a different file
22:46 <nats`> downhill_$index.txt
22:46 <litghost> ah sure, and then concat them
22:46 <nats`> they can easily be merged afterwards, and it avoids generating tons of multi-GB text files on the hard drive
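
With per-block file names like that, the merge afterwards is a plain concatenation, e.g.:

    # Merge per-block outputs once every block has finished; zero-padded
    # indices (downhill_0001.txt, ...) would keep the glob in numeric order.
    cat downhill_*.txt > downhill.txt
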
22:47 <litghost> FYI you could move ordered wires before https://github.com/SymbiFlow/prjxray/tree/master/fuzzers/073-get_counts and use that for the pip count
22:47 <tpb> Title: prjxray/fuzzers/073-get_counts at master · SymbiFlow/prjxray · GitHub (at github.com)
22:47 <nats`> uhhmm I'd rather do things step by step, because I'm really new to the project :)
22:47 <nats`> I don't want to make a mistake and break things :)
22:47 <litghost> sure
22:48 <nats`> by the way, I just noticed this: WARNING: [Vivado 12-2683] No nodes matched 'get_nodes -uphill -of_object INT_R_X1Y149/INT_R.WW4END_S0_0->>WW2BEG3'
22:48 <nats`> could it be a problem when you call get_nodes on a failed match?
22:48 <litghost> that is fine, some pips are disconnected
22:48 <nats`> you know, the usual free-on-return path that never gets covered?
22:50 <nats`> okay, time for bed for me, but I'll write the bash version tomorrow and test it before submitting a pull request
22:50 <nats`> and that will also fix the 072-074 fuzzer builds
22:50 <nats`> I already wrote it but can't test it yet
22:51 <nats`> good night
22:56 <litghost> nats: Sounds good. As you go, it would be good to fix https://github.com/SymbiFlow/prjxray/issues/171
22:56 <tpb> Title: All output products should go into "build" dir · Issue #171 · SymbiFlow/prjxray · GitHub (at github.com)
22:56 <litghost> Especially if you are adding more intermediates
23:03 <nats`> litghost, that's what I'm fixing :)
23:04 <nats`> that's why I'm patching 072: I have patches for 071-074
23:04 <nats`> 071 is merged, but the others depend on 072 for testing
23:05 <nats`> just before going to bed: I think I could start several interpreters in parallel for 072, as a compromise to get more speed
23:05 <nats`> talk tomorrow :)
23:05 <nats`> good night
