Wednesday, 2019-02-20

*** tpb has joined #symbiflow  00:00
<mithro> litghost: Great  00:26
<litghost> Confirmed, at a minimum 16-bit BRAMs pass the simple RAM tester!  The last step is to add support for initial BRAM contents, and then 16-bit BRAMs are ready.  00:30
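(A minimal hedged Verilog sketch of what "initial BRAM contents" means in practice: a 16-bit wide RAM whose power-on contents have to be encoded into the bitstream's BRAM initialization bits. The module name and init pattern are hypothetical, not the actual RAM tester design.)
```
// Hypothetical 16-bit wide RAM with initial contents. Synthesis should
// infer a block RAM; the initial block supplies the contents that the
// bitstream writer then needs to encode into the BRAM INIT bits.
module ram16_init (
    input  wire        clk,
    input  wire        we,
    input  wire [9:0]  addr,
    input  wire [15:0] din,
    output reg  [15:0] dout
);
    reg [15:0] mem [0:1023];

    integer i;
    initial begin
        // A real test might instead use $readmemh("init.hex", mem).
        for (i = 0; i < 1024; i = i + 1)
            mem[i] = i[15:0];
    end

    // Synchronous read and write so the memory maps onto a block RAM
    // rather than distributed (LUT) RAM.
    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;
        dout <= mem[addr];
    end
endmodule
```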
<mithro> FYI - My house is literally getting packed into boxes right now, so I probably won't be around much :-P  00:51
<mithro> litghost: Should I kick off a rebuild now? Has everything needed been merged?  00:54
<litghost> Go ahead and kick off a rebuild  00:56
<mithro> litghost: Did you get a chance to chat with kgugala and acomodi about the packing issues they were having?  01:07
<litghost> mithro: Ya, your comment above is relevant.  We're thinking about splitting CLBLL into two SLICE_Ls.  There are some specific assumptions in the placer that may be placated by that route.  In parallel, they are going to continue to hammer at VPR to see if we can solve the issue directly  01:08
<mithro> litghost: I think we should probably have a sync-up around packer stuff once I'm back in the US  01:11
<mithro> litghost: Are there things actually shared between slices in a tile?  01:12
<litghost> For a CLB, no  01:13
<litghost> For a BRAM, yes  01:13
<litghost> For a FIFO, unclear, etc, etc  01:13
<mithro> litghost: I was pondering if we should convert slices in CLBs into VPR tiles?  02:10
<mithro> the description given to VPR is a /representation/ of the hardware - it doesn't need to (and probably shouldn't) match the Xilinx representation, nor how the actual hardware is exactly laid out....  02:12
*** citypw has joined #symbiflow  02:44
<litghost> You mean a slice?  02:45
<mithro> litghost: Yes, split a CLB into two tiles - one for each slice  03:03
<litghost> mithro: That's what I suggested to kgugala / acomodi  03:10
<mithro> litghost: Ha, great minds think alike - so do terrible ones, guess it's hard to know which is which :-P  03:11
<mithro> Gah, I forgot to update before doing the rebuild...  03:37
*** jevinskie has joined #symbiflow  04:04
<litghost> '  04:10
*** _whitelogger has quit IRC  04:22
*** _whitelogger has joined #symbiflow  04:24
<mithro> litghost: '  05:15
<mithro> unbalanced quotes make me uneasy :-P  05:15
*** jevinski_ has joined #symbiflow  05:29
*** jevinskie has quit IRC  05:31
*** jevinski_ has quit IRC  06:12
*** jevinskie has joined #symbiflow  06:13
*** OmniMancer has joined #symbiflow  06:53
*** symbiflow-slack has joined #symbiflow  07:40
*** tgielda has joined #symbiflow  07:42
*** pgielda_ has joined #symbiflow  07:53
<pgielda_> slack gateway should be working now  07:56
<symbiflow-slack> <pgielda> symbiflow.slack.com  07:56
<symbiflow-slack> <pgielda> there is a channel there called symbiflow, that is synced both ways with the IRC  07:56
<symbiflow-slack> <pgielda> symbiflow-slack is a proxy user that forwards messages both ways  07:57
<mithro> pgielda_: Great  08:05
<symbiflow-slack> <me1> Testing?  08:06
<symbiflow-slack> <kgugala> Looks fine  08:07
<mithro> pgielda_: Any idea why it just said I was <me1> ? :-P  08:07
<mithro> Maybe we should shorten the nick to something like sf-slack?  08:11
<mkurc> test  08:27
<symbiflow-slack> <mkurc> test  08:28
<symbiflow-slack> <mkurc> :thumbsup:  08:28
<symbiflow-slack> <pgielda> I am pretty sure we can tweak all of this  08:31
<symbiflow-slack> <pgielda> @mithro me1 is probably because on slack "me" was taken (or maybe too short?) and your email is me@domain  08:34
<symbiflow-slack> <pgielda> I am guessing here of course  08:34
<symbiflow-slack> <pgielda> so your real username on slack is me1 apparently ;)  08:34
<symbiflow-slack> <pgielda> This can definitely be fixed though, as this is something our bridge adds  08:34
*** symbiflow-slack has quit IRC  08:43
*** sf-slack has joined #symbiflow  08:43
<sf-slack> <pgielda> here we go, sf-slack it is  08:44
*** citypw has quit IRC  10:05
<sf-slack> <mkurc> Do we support RAM128X1D in VPR? I'm trying to pack (just pack) the Picosoc test and it fails saying: "Message: Can not find any logic block that can implement molecule. Pattern DRAM128_DP soc.memory.mem.28.1.0.f7a_mux". When I remove the 128-bit RAMs and use only 32- and 64-bit wide ones the pack succeeds. I can see that the techmap converts them to SPRAM128+DPRAM128+DRAM_2_OUTPUT_STUB. Haven't checked the arch XML file yet.  11:56
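(For reference, RAM128X1D is the Xilinx 128x1 dual-port distributed-RAM primitive that the techmap is lowering here. A hedged instantiation sketch follows; the surrounding signal names are illustrative and not taken from the picosoc netlist.)
```
// Illustrative RAM128X1D instantiation (128 x 1, dual-port distributed RAM).
// Port and parameter names are those of the Xilinx primitive; the
// connected signals are made up for this sketch.
RAM128X1D #(
    .INIT(128'h0)        // initial contents, one bit per address
) ram128 (
    .WCLK (clk),         // write clock
    .WE   (we),          // write enable
    .A    (waddr),       // 7-bit write/read address (SPO port)
    .D    (din),         // 1-bit data input
    .SPO  (spo),         // read output at address A
    .DPRA (raddr),       // 7-bit read address for the second port
    .DPO  (dpo)          // read output at address DPRA
);
```
The f7a_mux in the pack error above is consistent with this: a 128-deep distributed RAM is built from two 64-deep LUT RAMs combined through the F7 muxes of a SLICEM.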
<sf-slack> <acomodi> I am dealing with something similar for the `SLICEMs` issue. I have found out that by modifying the DRAM xml definitions in `xc7/primitives/slicem/` the SLICEM issue is solved, but when trying to route `xc7/tests/dram/128x1d` it fails with the following message  12:02
<sf-slack> <acomodi> `No possible routing path from cluster-external source (LB_SOURCE) to cluster-internal sink (LB_SINK accessible via architecture pins: BEL_BB-DRAM_2_OUTPUT_STUB[0].DPO[0]): needed for net 'ram0.DPO_FORCE' from net pin 'ram0.f7a_mux.O' to net pin 'ram0.stub.DPO'`  12:02
<sf-slack> <acomodi> I would assume there is some bug in the dram arch definition  12:03
<sf-slack> <mkurc> ok, I'll look into that  12:08
*** OmniMancer has quit IRC  12:39
*** mkurc has quit IRC  13:30
<sf-slack> <acomodi> `slicem` issue update: I have been checking the xml definition of the slicem and I have noticed that the d_drams are not produced by the templates the way dram_a/b/c are.  14:16
<sf-slack> <acomodi> in `vpr` I got the following error message: `Differing modes for block.  Got LUTs previously and DRAMs for interconnect DO6.` It was related to pin DO6, which most probably had to do with dram_d. I was suspicious about the fact that I received the `mode` error only for the pin DO6, so I decided to make dram_d uniform with the other ones (a, b, c) by changing the `slicem.pb_type.xml` in `xc7/primitives/slicem`  14:19
<sf-slack> <acomodi> By modifying the xml definition, the `chain_packing` test with 5 counters passed (including `slicems`) and we got a `top.bit` with all the LEDs blinking (also the one using the `slicem`)  14:20
<sf-slack> <kgugala> +1  14:21
<sf-slack> <acomodi> I am still testing whether this was actually the issue; I have run all the DRAM tests in the `xc7/tests/dram` directory and got the following results  14:21
<sf-slack> <acomodi> (before the slicem.pb_type.xml changes)  14:23
<sf-slack> <acomodi>  14:25
```
1_256x1s: not passing
1_128x1d: not passing
2_32x1d:  not passing
2_64x1d:  passing
2_128x1s: passing
4_32x1s:  passing
4_32x2s:  passing
4_64x1s:  passing
```
<sf-slack> <acomodi> I am currently re-running all the dram tests with my solution that gets the slicems to work with the `chain_packing` test. I believe there are some pins which are not well defined in the xml definitions of the slicems  14:27
<litghost> 128x1d used to pass  14:42
<litghost> mkurc: 128x1d should be supported, however the DLUT DRAM isn't templated because it is a special snowflake  14:44
<sf-slack> <acomodi> I am using the `vpr` version from https://github.com/SymbiFlow/vtr-verilog-to-routing/pull/9  14:44
<tpb> Title: [WIP] round robin packing by kgugala · Pull Request #9 · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)  14:44
<litghost> acomodi: making DO6 like the others will not work  14:45
<sf-slack> <acomodi> I am not sure if this is the cause  14:45
<sf-slack> <mkurc> @litghost Yes, the 128x1 is supported, but it failed to pack for some reason when I tried the picosoc with my techmap.  14:45
<litghost> acomodi: If you make DO6 like the other DRAMs it will no longer work in hardware  14:47
<sf-slack> <acomodi> Actually I have produced a bitstream for the basys3 and all 5 LEDs were working correctly (where the 5th LED is related to the slicem)  14:48
<sf-slack> <kgugala> but that bitstream does not use brams  14:48
*** citypw has joined #symbiflow  14:49
<sf-slack> <acomodi> Yeah, probably that is why it works for the `chain_packing` test, but maybe it will fail for the `drams` one  14:50
<litghost> The DLUT structure is not templated because it must be different from the others  14:52
*** test_user has joined #symbiflow  14:54
<litghost> acomodi: what change did you make that helped with the chain issue, but broke the DRAM pack test?  14:59
<sf-slack> <acomodi> So, without the changes I obtained the results I previously posted. Anyway, the changes consist of using the template for the d_dram as well. Basically I changed the `CMakeLists` in `xc7/primitives/slicem/Ndram/` to include d_dram in the templating  15:01
<sf-slack> <acomodi> and then I adapted the `slicem.pb_type.xml` to deal with the d_dram that is using the template  15:02
*** test_user has quit IRC  15:06
<sf-slack> <acomodi> I am running the `dram` tests once again with the master branch of `arch-defs` to see which fail and which don't. BTW all of the previous failures happened during the `cluster_routing` step when trying to `pack`  15:08
<sf-slack> <acomodi> `dram` tests update: all the tests have passed using the current `arch-defs` master as well as the conda `vpr`  15:29
<litghost> That is pack only  15:42
<litghost> PnR only  15:42
<litghost> You need to test on hardware  15:42
<litghost> It won't work  15:43
<sf-slack> <acomodi> Ok, I'll try them on HW, but note they are without my modifications to the slicem.pb_type.xml. I'll let you know in a bit  15:45
*** jevinskie has quit IRC  15:49
*** jevinskie has joined #symbiflow  15:50
<sf-slack> <mkurc> Can you tell me what the most complex design is that we have managed to implement using Yosys+VPR on 7-series? I am concerned that if we move from a 4-bit counter to the PicoSoc in one step we might fail miserably...  15:53
*** pgielda_ has quit IRC  16:05
<sf-slack> <mkurc> Anyway, I'd like to start working on https://github.com/SymbiFlow/symbiflow-arch-defs/issues/360  16:07
<tpb> Title: Need BRAM simulation model · Issue #360 · SymbiFlow/symbiflow-arch-defs · GitHub (at github.com)  16:07
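(Issue #360 above asks for a BRAM simulation model. As a rough hedged sketch, a behavioral model for a simplified single-port case might look like the following; the real RAMB18/RAMB36 primitives expose a much larger port list and INIT parameters, so the module and port names here are hypothetical.)
```
// Hedged sketch only: a behavioral single-port block RAM model for
// simulation, heavily simplified relative to the real 7-series BRAM.
module bram_sim_sketch #(
    parameter ADDR_WIDTH = 10,
    parameter DATA_WIDTH = 16
) (
    input  wire                  clk,
    input  wire                  en,
    input  wire                  we,
    input  wire [ADDR_WIDTH-1:0] addr,
    input  wire [DATA_WIDTH-1:0] din,
    output reg  [DATA_WIDTH-1:0] dout
);
    reg [DATA_WIDTH-1:0] mem [0:(1 << ADDR_WIDTH) - 1];

    // Synchronous write and registered read: the behavior that
    // distinguishes a block RAM from distributed (LUT) RAM.
    always @(posedge clk) begin
        if (en) begin
            if (we)
                mem[addr] <= din;
            dout <= mem[addr];
        end
    end
endmodule
```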
*** pgielda_ has joined #symbiflow  16:12
<sf-slack> <acomodi> Indeed the dram tests do not work on HW (the input switches do not seem to produce the expected effects)  16:25
<litghost> The DLUT was set up that way for a reason  16:30
<sf-slack> <acomodi> To be more precise though, I have tested master with no changes to the slicem definitions nor to the DLUT  16:32
*** jevinskie has quit IRC  16:34
<litghost> Did you regenerate the harness?  16:44
<sf-slack> <acomodi> Right, I forgot about that  16:46
<litghost> mkurc: Can you open an issue for your DRAM128X1D problem?  We have a PnR test that appears to be passing, so I'm surprised by the failure  17:19
<litghost> mkurc: I re-tested master, and DRAM128X1D appears to be packing on master with master+wip VPR.  In the issue, include which VPR you are using  17:22
*** tgielda has quit IRC  17:52
<litghost> mithro: https://github.com/SymbiFlow/prjxray-db/pull/4  19:37
<sf-slack> <mkurc> @litghost I'd rather not open an issue for DRAM128X1D yet, because the tests are passing and the DRAM packs. I've encountered a problem when trying to pack the whole picosoc, which contains DRAM128X1Ds. I've probably stumbled upon the issue with incorrect packing of carry chains. I used VPR from the master branch. I'll check it against @kgugala's round-robin packing fix tomorrow.  20:20
