*** tpb has joined #vtr-dev | 00:00 | |
*** digshadow has quit IRC | 04:51 | |
*** digshadow has joined #vtr-dev | 05:11 | |
*** digshadow has quit IRC | 06:25 | |
*** digshadow has joined #vtr-dev | 06:30 | |
*** digshadow has quit IRC | 12:53 | |
*** digshadow has joined #vtr-dev | 13:02 | |
kem_ | mithro: a) If you're getting mismatches with the .net file, I'd suggest looking through the code that writes out the .net file. The relevant part seems to be: https://github.com/verilog-to-routing/vtr-verilog-to-routing/blob/master/vpr/src/pack/output_clustering.cpp#L139-L181 | 16:04 |
tpb | Title: vtr-verilog-to-routing/output_clustering.cpp at master · verilog-to-routing/vtr-verilog-to-routing · GitHub (at github.com) | 16:04 |
kem_ | mithro: b) The current master code supports a single linear chain, where each pack pattern edge has only a fan-out of 1 | 16:06 |
kem_ | mithro: If the carry chain matches that, then things should work (which is what the VTR architectures do) | 16:07 |
kem_ | mithro: If I recall correctly, the issue on the ice40 is that there is more flexibility on the carry-chain (i.e. muxes to/from general inputs), which confuses the packer's router since it doesn't understand that some of the connections have more limited flexibility outside the block (i.e. Fc=0 for the block-level CIN/COUT pins) | 16:09 |
kem_ | mithro: Now it is possible to tweak the intra-block RR graph to make the packer router 'understand' those constraints, and I did put that code into the generalized_pack_patterns branch | 16:10 |
kem_ | mithro: But that uncovers another problem: it is now possible for the input BLIF file to have a connection, for example on ICE40, where a LUT has the carry signal as an input. This is legal in the architecture (since it has the mux), but would require multi-fanout pack patterns. | 16:13 |
kem_ | mithro: Otherwise the packer may put the LUT in a different logic block, and there would be no way to route the carry signal to that LUT (since it's part of the dedicated carry chain). | 16:14 |
kem_ | mithro: The rest of the work on the generalized_pack_patterns branch was trying to add support for multi-fanout pack patterns, which turns out to be much more involved... | 16:15 |
mithro | For (a) I've been looking at the cluster output code, but I'm having trouble understanding exactly how the internal interconnections (i.e. the bits inside an <interconnect> block) are marked as being used | 16:18 |
kem_ | mithro: I'm not sure I quite follow then? There should be edges in the intra-block routing corresponding to the switches, and if those edges are used by a net that should mean that the corresponding switch is turned on? | 16:22 |
mithro | I'll be at a computer shortly and then I'll link you | 16:26 |
mithro | Okay, at a computer now | 16:31 |
mithro | So, the t_pb_route object contains the internal routing that is being used inside a tile? | 16:33 |
mithro | The t_pb_route on the t_pb object is a flat array -- t_pb_route is an array of size [t_pb->pb_graph_node->total_pb_pins] ? | 16:35 |
mithro | The t_pb_route objects have a driver_pin_id (which is an int), a list of sink_pb_pin_ids (a vector of ints), and a pb_graph_pin | 16:36 |
mithro | The pb_graph_pin is a pointer to a part of the netlist | 16:38 |
mithro | However I think the part I don't quite understand is that a t_pb is a hierarchy of the pb_type objects -- how do I convert between t_pb objects and the pin_ids? | 16:40 |
mithro | The t_pb_route objects don't have any references to t_pb objects | 16:42 |
mithro | kem_: The conversion between the ints for the pin_ids and the t_pb objects I think is what is causing me issues | 16:46 |
kem_ | mithro: Yeah, that stuff is pretty confusing. | 16:51 |
kem_ | mithro: I have to re-figure it out each time I go into the cluster routing... which I try to avoid doing :) | 16:52 |
mithro | kem_: Good to know it isn't just me :-P | 16:53 |
kem_ | mithro: I think I wrapped that up in a look-up class: https://github.com/verilog-to-routing/vtr-verilog-to-routing/blob/cd6d0ac2ea931485a6c64101416d12c77c682c40/vpr/src/util/vpr_utils.h#L43-L61 | 16:53 |
tpb | Title: vtr-verilog-to-routing/vpr_utils.h at cd6d0ac2ea931485a6c64101416d12c77c682c40 · verilog-to-routing/vtr-verilog-to-routing · GitHub (at github.com) | 16:53 |
mithro | kem_: Is there an example of using that somewhere? | 16:54 |
kem_ | mithro: Let me find one... | 16:54 |
kem_ | mithro: https://github.com/verilog-to-routing/vtr-verilog-to-routing/blob/master/vpr/src/timing/clb_delay_calc.inl#L81 | 16:56 |
tpb | Title: vtr-verilog-to-routing/clb_delay_calc.inl at master · verilog-to-routing/vtr-verilog-to-routing · GitHub (at github.com) | 16:56 |
kem_ | mithro: It's used in a couple of different places, but that's one I'm somewhat more familiar with | 16:57 |
kem_ | mithro: It's the delay calculation code which traces the intra-block routing adding up the annotated delays for timing analysis | 16:58 |
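The traversal being described here (following a net through the flat t_pb_route array) can be sketched in Python. The field names below mirror the t_pb_route description from the conversation (a flat array indexed by pin id, each entry holding a driver pin id and sink pin ids), but the class and the data are hypothetical, not VPR's actual C++ structures:

```python
# Minimal sketch of walking a flat t_pb_route-style table, assuming the
# layout described above: one entry per intra-block pin index, each
# holding the driver pin id and its sink pin ids. Data is hypothetical.
from dataclasses import dataclass, field

@dataclass
class PbRouteEntry:
    driver_pb_pin_id: int = -1          # -1 means no driver / unused pin
    sink_pb_pin_ids: list = field(default_factory=list)

def trace_net(pb_route, start_pin):
    """Follow sink pointers from start_pin, collecting every pin the net touches."""
    visited, stack = [], [start_pin]
    while stack:
        pin = stack.pop()
        if pin in visited:
            continue
        visited.append(pin)
        stack.extend(pb_route[pin].sink_pb_pin_ids)
    return sorted(visited)

# Tiny hypothetical cluster: pin 0 drives pins 2 and 3; pin 3 drives pin 5.
pb_route = {
    0: PbRouteEntry(-1, [2, 3]),
    2: PbRouteEntry(0, []),
    3: PbRouteEntry(0, [5]),
    5: PbRouteEntry(3, []),
}
print(trace_net(pb_route, 0))  # [0, 2, 3, 5]
```

Each visited pin is where per-edge information (such as the annotated delays mentioned above, or the interconnect a used edge belongs to) would be looked up in the real code.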
mithro | kem_: Okay, I think I see how to walk the t_pb_route and get the pins -- the question is how do these pins relate to the <interconnect> definitions? | 17:01 |
mithro | pin->output_edges[oedge]->interconnect | 17:02 |
mithro | kem_: I think this diagram explains the carry chain issue -> https://docs.google.com/drawings/d/1qREImoaUjWDSsnbimDu-Mig3_M9hTr-6q-zbPd2VPEM/edit | 17:03 |
tpb | Title: Verilog to Routing (VtR / VPR) - Carry Chain + Pack Patterns - Google Drawings (at docs.google.com) | 17:03 |
kem_ | mithro: Yep, from the edge you should be able to get back to the interconnect | 17:05 |
kem_ | mithro: Your diagrams make sense. The blue connections are problematic because they have fanout > 1 which isn't supported in master | 17:07 |
mithro | kem_: I added a bigger example of closer to the real situation | 17:19 |
mithro | kem_: The part in purple is the really hard bit to deal with -- it's where the only way to get the carry output onto the fabric is via using the tile *above* the current one | 17:22 |
mithro | kem_: Which leads me to another question, I want to create the architectures in this diagram -- how do I make odin-ii produce a blif file for this architecture? | 17:27 |
kem_ | mithro: So the top two figures basically show what VPR currently supports, a single linear carry chain with fanout = 1 on all edges | 17:29 |
kem_ | mithro: The middle two show a carry chain with fanout > 1 (not supported on master) | 17:29 |
kem_ | mithro: The bottom expands the fanout > 1 case across blocks | 17:30 |
mithro | kem_: I was thinking I could convert the middle type into the top type at the mapping phase in yosys | 17:30 |
kem_ | mithro: Possibly, I guess it depends what the blue connections connect to, and whether they use a dedicated connection internal to the block, or can reach the inter-block routing | 17:33 |
kem_ | mithro: Can you clarify what the pink/purple arrow in the bottom figure means? | 17:33 |
mithro | kem_: The pathway the signal needs to take to get onto the fabric | 17:34 |
kem_ | mithro: OK that makes sense. So the signal path is through the carry block, across the carry block and into a LUT. | 17:35 |
kem_ | mithro: There is probably one more possibility not shown in the diagram. The start/end blocks are marked as CARRYs (dashed). They could actually be any primitive type. | 17:36 |
kem_ | mithro: That is you could, for example, have a LUT driving the carry chain's CIN or sinking the COUT | 17:39 |
kem_ | mithro: Essentially, what is needed is support for pack patterns which are trees, instead of strict chains | 17:47 |
kem_ | mithro: The existing code is tightly coupled to the chain concept and doesn't do a very good job of isolating the idea of the pattern from the detection and implementation code | 17:48 |
kem_ | mithro: In the generalized_pack_patterns branch I tried to isolate things better | 17:48 |
kem_ | mithro: The first thing it does is build a model of the pack pattern | 17:50 |
kem_ | ICE40 style pack pattern graph https://usercontent.irccloud-cdn.com/file/ErfWUPqa/multifanout_packpattern.png | 17:51 |
kem_ | mithro: In the image it's assuming a block with only two adders | 17:52 |
kem_ | mithro: '*' indicates a wildcard block (could be another adder, or LUT/FF etc) | 17:52 |
kem_ | mithro: and the dashed edges indicate that matching that connection is optional | 17:53 |
kem_ | mithro: It then goes through and finds sub-graphs of the netlist which match the pattern | 17:55 |
mithro | kem_: Okay, I'm afraid you lost me with that graph diagram | 17:56 |
kem_ | mithro: Essentially the graph describes a pattern which is then matched against the input netlist | 17:57 |
kem_ | mithro: So #0 can be any block (wildcard) | 17:58 |
kem_ | mithro: So #1's cin pin can optionally be driven by that block | 17:58 |
kem_ | mithro: A candidate adder would become part of the pattern if its cin was driven by #1's cout | 17:59 |
kem_ | mithro: and so on | 17:59 |
mithro | kem_: You mean the candidate block would become part of a molecule because of the pack pattern? | 17:59 |
kem_ | mithro: Correct | 18:00 |
kem_ | mithro: The matching code will greedily try to find the largest matches in the netlist | 18:02 |
mithro | kem_: So looking at my diagrams -- on Figure - X | 18:04 |
kem_ | mithro: So if an adder had matched as #1, and it had a .names/LUT with a connection from #1's cout to the LUT's 4th input, that LUT would be added to the molecule as #4 | 18:04 |
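The greedy matching described above can be sketched as follows. The netlist encoding (a simple cout-to-cin connection map) is invented for illustration and is not VPR's actual data structure:

```python
# Hypothetical sketch of greedy chain-pattern matching: starting from a
# seed adder, keep absorbing the block driven by the current block's cout
# into the molecule for as long as a cout->cin connection exists.
def grow_chain_molecule(seed, cout_to_cin):
    """cout_to_cin maps a block name to the block whose cin it drives."""
    molecule = [seed]
    current = seed
    while current in cout_to_cin:
        nxt = cout_to_cin[current]
        if nxt in molecule:          # guard against cycles in the netlist
            break
        molecule.append(nxt)
        current = nxt
    return molecule

# add0.cout -> add1.cin -> add2.cin; add2.cout is left dangling.
links = {"add0": "add1", "add1": "add2"}
print(grow_chain_molecule("add0", links))  # ['add0', 'add1', 'add2']
```

A tree-shaped (multi-fanout) pattern would need the map to carry lists of sinks and the loop to branch, which is essentially the extra complexity the generalized_pack_patterns branch takes on.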
mithro | Do I need to mark the connections that go from BLOCK->CARRY with a pack_pattern property? | 18:04 |
kem_ | mithro: It depends | 18:05 |
*** litghost has quit IRC | 18:05 | |
*** litghost has joined #vtr-dev | 18:05 | |
kem_ | mithro: If the signals from the BLOCKs can reach the inter-block routing fabric, no | 18:05 |
kem_ | mithro: Effectively, if there are ways to get the BLOCK -> CARRY and CARRY -> BLOCK signals from the general fabric (which depends on the architecture) then, no | 18:07 |
kem_ | mithro: The packer would detect and place the CARRY blocks as a chain | 18:07 |
kem_ | mithro: And the other blocks would be packed as normal (since their signals can get to/from the inter-block routing) | 18:08 |
mithro | kem_: The orange arrows can't reach the fabric | 18:08 |
kem_ | mithro: Then they need to be part of the pack pattern | 18:08 |
kem_ | mithro: Otherwise the packer could put them in different blocks which would be unroutable | 18:09 |
mithro | kem_: So Figure - D1 explains this? | 18:22 |
mithro | kem_: Basically any time a path can't be connected to the general fabric, you need a pack_pattern? | 18:23 |
mithro | kem_: And a special case is when the connection is to a pin on the tile which can't be connected to the fabric? | 18:24 |
kem_ | mithro: Effectively yes, I believe that is the case. | 18:24 |
kem_ | mithro: The packer otherwise assumes that it can get a primitives signals to/from the inter-block routing. If it can't then it needs to be told to keep them together. | 18:25 |
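In a VPR architecture file, telling the packer to keep such blocks together is done by tagging the dedicated connection with a pack_pattern. A hedged sketch, with made-up block and port names (only the <direct>/<pack_pattern> element shapes are from the VPR architecture format):

```xml
<!-- Sketch of tagging a dedicated cout->cin connection so the packer
     keeps both adders in one molecule; names are hypothetical. -->
<direct name="carry_link" input="adder[0].cout" output="adder[1].cin">
  <pack_pattern name="chain" in_port="adder[0].cout" out_port="adder[1].cin"/>
</direct>
```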
mithro | kem_: And I think that explains the error messages I've been getting about a pin of an internal block being unable to be routed to a top-level pin... | 18:26 |
kem_ | mithro: That would make sense | 18:26 |
kem_ | mithro: It is probably possible to infer some of the pack patterns just from the architecture description, but no one has done that yet. And there are probably subtle/complex cases where you still want to manually define them | 18:27 |
mithro | kem_: Actually, there already seems to be some of that type of inference in the code? There's some talk about "forced connections"? | 18:27 |
kem_ | mithro: If I recall correctly that is something different, more related to the idea of an optional connection on a pattern | 18:30 |
mithro | kem_: How does it know when a pack pattern is optional versus required? | 18:30 |
kem_ | mithro: I think the code in master treats chains as optional and everything else as required | 18:30 |
kem_ | mithro: The optional is because you may have a chain in the netlist with fewer bits (e.g. 6 bits) but your architecture has more bits per logic block (e.g. 10 bits). Hence extending the chain is optional. | 18:31 |
mithro | kem_: That requires you to be able to get a signal in/out of the carry chain CO/CIN? | 18:32 |
kem_ | mithro: Potentially, often the chain's COUT is unused and left disconnected | 18:34 |
mithro | kem_: Ahh | 18:35 |
mithro | kem_: So, it feels like the quickest solution to make things work might be to convert Figure - B2 style carry output to Figure - A2 style in tech mapping in Yosys | 18:36 |
kem_ | mithro: Yes that seems like the quickest way | 18:38 |
mithro | kem_: Then we can loop back to fixing up support in VTR | 18:38 |
kem_ | mithro: Makes sense | 18:38 |
mithro | kem_: BTW Do you have a way of taking the .net output and converting it into something more "human readable" ? | 18:39 |
kem_ | mithro: Nope :( | 18:41 |
mithro | kem_: At some point I'll get frustrated enough and write a little Python script :-P | 18:41 |
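A starting point for that "little Python script" could pretty-print the block hierarchy of a .net file. This assumes the .net file is XML with nested <block> elements carrying name/instance attributes (as VPR writes them); the tiny input below is hand-written in that shape, not real VPR output:

```python
# Sketch: pretty-print the nested <block> hierarchy of a VPR .net file.
# Assumes <block name="..." instance="..."> nesting; treat the exact
# tag/attribute names as assumptions about the format.
import xml.etree.ElementTree as ET

def dump_blocks(elem, depth=0):
    """Return one indented line per <block>, depth-first."""
    lines = []
    if elem.tag == "block":
        name = elem.get("name", "?")
        inst = elem.get("instance", "?")
        lines.append("  " * depth + f"{name} ({inst})")
        depth += 1
    for child in elem:
        lines.extend(dump_blocks(child, depth))
    return lines

# Tiny hand-written example in the same shape, not a real VPR output:
example = """<block name="top" instance="FPGA_packed_netlist[0]">
  <block name="n1" instance="clb[0]">
    <block name="n1" instance="lut[0]"/>
  </block>
</block>"""
for line in dump_blocks(ET.fromstring(example)):
    print(line)
```

For a real .net file you would replace ET.fromstring(example) with ET.parse(path).getroot().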
mithro | kem_: Okay, I'm going to head to lunch and then give another try at the t_pb_route and interconnect stuff afterwards | 19:03 |
Generated by irclog2html.py 2.13.1 by Marius Gedminas - find it at mg.pov.lt!