Tuesday, 2021-11-02

00:00 *** tpb <[email protected]> has joined #litex
02:28 *** TMM_ <[email protected]> has quit IRC (Quit: https://quassel-irc.org - Chat comfortably. Anywhere.)
02:28 *** TMM_ <[email protected]> has joined #litex
02:50 *** peeps[zen] <peeps[zen]!~peepsalot@openscad/peepsalot> has joined #litex
02:50 *** peepsalot <peepsalot!~peepsalot@openscad/peepsalot> has quit IRC (Ping timeout: 268 seconds)
03:10 *** Degi <[email protected]> has quit IRC (Ping timeout: 245 seconds)
03:11 *** Degi <[email protected]> has joined #litex
07:31 *** Melkhior <Melkhior!~Melkhior@2a01:e0a:1b7:12a0:225:90ff:fefb:e717> has quit IRC (Quit: Leaving)
07:40 *** Melkhior <Melkhior!~Melkhior@2a01:e0a:1b7:12a0:225:90ff:fefb:e717> has joined #litex
07:43 *** FabM <[email protected]> has joined #litex
08:50 *** C-Man <[email protected]> has quit IRC (Ping timeout: 268 seconds)
16:01 *** FabM <FabM!~FabM@armadeus/team/FabM> has quit IRC (Ping timeout: 260 seconds)
16:55 *** peeps[zen] <peeps[zen]!~peepsalot@openscad/peepsalot> has quit IRC (Ping timeout: 268 seconds)
16:55 *** peepsalot <peepsalot!~peepsalot@openscad/peepsalot> has joined #litex
16:58 *** jersey99 <[email protected]> has joined #litex
17:00 <jersey99> leons and david-sawatzke[m: I saw some issues going back and forth about ping and SRAM in liteeth. Coincidentally, when I try to rebuild 10G on the KC705, ping seems to fail. Either way, is it safe to assume that master on liteeth is now safe to build 10G Ethernet?
17:01 <david-sawatzke[m> Only for SRAM; for the hardware stack, use https://github.com/enjoy-digital/liteeth/pull/88
17:07 <leons> jersey99: Depending on what you're doing, I have quite a bunch of additions to the PHYXGMII TX code which I'm going to PR in a couple of hours
17:09 <leons> Also, the Etherbone core doesn't yet work with the 64-bit data path because it needs some more buffering before/after the StrideConverter. I can look into this in the next few days if that's something you want to use.
17:10 <jersey99> I don't need Etherbone, but I do need a working UDP port
17:10 <jersey99> so, am I hearing that master should work for standard operation?
17:11 <jersey99> leons: I am guessing your additions are for the case when the lanes are not aligned?
17:11 <jersey99> your additions to the PHYXGMII, I mean
17:14 <leons> Well, per IEEE you do need alignment on lane 0, that is, the first and fifth octet in the 64-bit bus word. But yes, after these changes the TX side now maintains its own IFG (inter-frame gap) logic to allow packet transmission to start on the fifth octet. It also optionally implements DIC (deficit idle count) to maintain an actual effective data rate of 10 Gbit/s
17:15 <leons> I still need to adapt the simulation and clean up my unit tests for the PHY, but otherwise I'm pretty confident it works
17:15 <leons> All of this is especially important if you're implementing switching logic, etc.
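The deficit idle count mechanism leons describes can be sketched as a small bookkeeping model: the nominal 12-byte IFG is shrunk or stretched so the next packet start lands on a 4-byte lane boundary, while a deficit counter bounded to 0..3 guarantees the average gap never drops below 12 bytes. This is an illustrative plain-Python model of that rule, not LiteEth's actual Migen implementation; the function name and interface are invented for the sketch.

```python
def dic_step(deficit, r):
    """One deficit-idle-count decision.

    deficit: accumulated shortened-IFG bytes so far (0..3)
    r:       bytes by which the nominal 12-byte IFG overshoots the
             previous 4-byte start-alignment boundary (0..3)
    Returns (ifg_bytes_to_insert, new_deficit).
    """
    if r == 0:
        return 12, deficit              # already aligned: nominal gap
    if deficit + r <= 3:
        return 12 - r, deficit + r      # shrink IFG now, accrue deficit
    return 12 + (4 - r), deficit - (4 - r)  # stretch IFG, pay deficit back
```

Because the deficit can never exceed 3, the total idle inserted over any window is at least `12 * n_packets - 3` bytes, which is how the effective 10 Gbit/s rate is preserved despite individual gaps as short as 9 bytes.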
17:16 <jersey99> Awesome, that is good stuff. For now I will spend some time looking at signals to see why ping fails on master.
17:16 <leons> Oh, for that you should try my open PR
17:16 <leons> https://github.com/enjoy-digital/liteeth/pull/88
17:17 <jersey99> OK, let me try that, leons
17:19 <leons> jersey99: if I may ask, which FPGAs do you use? Xilinx? And if so, which generation?
17:19 <jersey99> My 10G core works with VUP
17:20 <jersey99> and for a uniform test platform, I set up a KC705 with some liteeth collaborators
17:20 <jersey99> all Xilinx
17:22 <leons> Okay. I'm working with a Kintex UltraScale+ and a 7-Series Virtex. How are you instantiating the XGMII-compatible interface? Xilinx IP core, or an interface to the transceiver somehow?
17:22 <jersey99> for now, a Xilinx IP core with an XGMII port
17:23 <jersey99> which V7 board are you using?
17:23 <jersey99> I do have a version somewhere with the transceiver interface, which I haven't touched in two years. Let me know if you are going that route.
17:24 <leons> My V7 board is the NetFPGA-SUME
17:24 <jersey99> I remember setting it up and looking at some signals, but I didn't see it through. At that point I really only cared about TX
17:24 <jersey99> OK. I have an HTG-703 on me; it's a V7 with GTH transceivers
17:25 <jersey99> how are you interfacing the XGMII?
17:25 <leons> I think we should really stick to XGMII; that's already sufficiently complex and at least cross-vendor. But it might be worthwhile to have a Xilinx transceiver-to-XGMII bridge in Migen as well
17:26 <jersey99> Well, you know, if there is a way to avoid a black box, we will eventually ...
17:27 <jersey99> For now, let's stick to XGMII
17:27 <leons> For the 7-Series I'm using the PCS/PMA core by Xilinx; for USP I'm borrowing the transceiver-to-XGMII logic from the verilog-ethernet project
17:27 <jersey99> Ah, I see. Why not use the PCS/PMA core for that as well?
17:27 <leons> It isn't compatible with USP
17:28 <jersey99> Is that true? I managed to synthesize and run the XGMII interface on a Virtex USP.
17:31 <leons> Huh, that's interesting… maybe my Vivado is too old. I find Xilinx IP core versioning and documentation really confusing. But hey, now we know it works with two XGMII "PHY"s
17:32 <leons> I'm not sure I understand your point about black boxes, though. I don't think it's reasonable or worthwhile to get rid of the transceiver wizard core, but translating that to XGMII in a separate module (such that the XGMII-to-stream adapter stays separate) would be really cool
17:32 <jersey99> Well, I haven't tried your latest XGMII PHY yet, but I assume it will just work. I have the old xgmii.py working with 7-Series Kintex, and US and USP Virtex
17:33 <leons> Yeah, I was worried that I might rely on some implementation-specific behavior of the XGMII, but I don't think so
17:36 <jersey99> I just meant that the Xilinx IP core that generates an XGMII interface is something we could get rid of.
17:37 <leons> jersey99: in case you want to test it already, here's a current WIP revision. xgmii.py is tested on real HW at line rate and should already work: https://github.com/lschuermann/liteeth/blob/dev/compliant-xgmii/liteeth/phy/xgmii.py
17:39 <jersey99> Cool. Also, out of curiosity, how are you testing line rate? :)
17:40 *** mm002 <[email protected]> has joined #litex
17:40 *** peepsalot <peepsalot!~peepsalot@openscad/peepsalot> has quit IRC (Read error: Connection reset by peer)
17:40 *** mm003 <[email protected]> has quit IRC (Read error: Connection reset by peer)
17:41 *** peepsalot <peepsalot!~peepsalot@openscad/peepsalot> has joined #litex
17:46 <leons> jersey99: that's an excellent question. My tests have primarily been creating a crossover connection between two SFP+s at the stream-interface level, then running regular traffic through the FPGA between two machines with NICs from different vendors (Aquantia, Intel, Solarflare)
17:47 <leons> I've also built a really stupid PacketStreamer module which creates packets with scapy, puts them into a read-only memory, and the FPGA just spews them out as fast as possible
17:48 <leons> For development I've written some synthetic test cases, but it turns out that when you control both the implementation and the test cases, it's really easy to implement things in a non-compliant way
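The pre-baked-packet approach behind a ROM-backed streamer can be sketched without scapy: build complete Ethernet frames offline, FCS included, and load the resulting bytes into a memory the gateware replays. This is a stdlib-only illustration of the idea, not leons' actual PacketStreamer; the function name and EtherType choice are for the sketch only.

```python
import struct
import zlib

def build_frame(dst, src, ethertype, payload):
    """Build a minimal Ethernet II frame with trailing FCS, suitable for
    preloading into a ROM-backed packet streamer. Illustrative sketch;
    leons' PacketStreamer generates its packets with scapy instead."""
    hdr = (bytes.fromhex(dst.replace(":", ""))
           + bytes.fromhex(src.replace(":", ""))
           + struct.pack("!H", ethertype))
    frame = hdr + payload
    frame += b"\x00" * max(0, 60 - len(frame))         # pad to 60-byte minimum
    fcs = struct.pack("<I", zlib.crc32(frame) & 0xffffffff)
    return frame + fcs                                  # FCS sent least-significant byte first

# 0x88B5 is an EtherType reserved for local experimental use
frame = build_frame("ff:ff:ff:ff:ff:ff", "02:00:00:00:00:01", 0x88B5, b"hello")
```

Ethernet's FCS is the standard CRC-32, which `zlib.crc32` computes; appending it little-endian means re-running the CRC over the whole frame yields the well-known residue value, a quick sanity check for generated frames.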
17:48 <jersey99> haha, indeed
17:49 <leons> So for quicker development cycles I've built a crossover connection between the XGMII interfaces of the two transceivers, captured a bunch of bus data using the ILA, exported it as CSV, and built an XGMIICSVInjector 😀
17:50 <leons> It's still very broken code though; I need to rework the tests. But it's been so much better than synthetic tests or straight-up building hardware. 10 Gbit/s is way too fast for typical debugging approaches 😕
17:51 <jersey99> Haha .. while building, I found that the Verilator testbench [xgmii_ethernet.c] helped the most.
17:53 <jersey99> For line rate, it was practically impossible to test. So I did some persistence on the scope, with a blip that emitted packets, and could see that the valid signals pretty much took up the whole duty cycle. Then I used some checksums and counters in the packets to check I wasn't dropping anything, and dumped chunks of network traffic to assure myself.
17:53 <jersey99> None of this is concrete, mind you
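The counters-and-checksums check jersey99 describes amounts to a simple offline scan over captured payloads: verify each packet's checksum and look for gaps in a monotonically increasing sequence number. The payload layout here (32-bit big-endian sequence number followed by a 16-bit additive checksum of the rest) is invented for illustration; the real format used on the link is not described in the log.

```python
def check_capture(packets):
    """Scan a list of captured payloads (bytes) for drops and corruption.

    Assumed hypothetical layout per payload:
      bytes 0..3  big-endian sequence number
      bytes 4..5  16-bit additive checksum of the remaining bytes
    Returns (dropped_packet_count, checksum_error_count).
    """
    drops = errors = 0
    expected = None
    for p in packets:
        seq = int.from_bytes(p[:4], "big")
        csum = int.from_bytes(p[4:6], "big")
        if sum(p[6:]) & 0xFFFF != csum:
            errors += 1                      # corrupted in flight (or at capture)
        if expected is not None and seq != expected:
            drops += seq - expected          # gap in the sequence = dropped packets
        expected = seq + 1
    return drops, errors
```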
17:53 <leons> Yes, that's been a blessing. However, when you get into the really weird stuff, like shifted transmissions due to START alignment on the fifth byte, or implementing DIC, it didn't help me any more, because I'd missed parts of the IEEE 802.3 standard. So xgmii_ethernet.c didn't help much for this
17:53 <jersey99> What you have is definitely sophisticated
17:53 <leons> That's probably why the IFG and DIC were a nightmare to build
17:54 <jersey99> yeah, what is your application? A router?
17:54 <leons> I hope the documentation will spare others from making the same mistakes I did 🙂
17:54 <leons> Currently just streaming data at approximately line rate, but after my current project I'm very much looking forward to building a switch/router type of thing
17:55 <jersey99> cool
17:55 <leons> That's why I wanted to pave the way for this already. Also, it feels better to have a foundation which works reliably and won't be a bottleneck later on
17:56 <leons> What are you trying to build, if I may ask? (at the risk of spamming this channel :))
17:56 <jersey99> definitely. BTW, I just ran ping on the PR 88 branch, and it is intermittent as of now. Let me look closer
17:58 <jersey99> My goal is to evacuate as much data as possible from the FPGA
17:58 <jersey99> all data coming from ADCs
17:58 <leons> re ping: it might well be a problem with the Packetizer/Depacketizer. Feel free to open that Pandora's box, but be warned: there lies madness.
17:59 <jersey99> lol
17:59 <jersey99> I am well aware .. I spent countless hours two years ago with packet.py
17:59 <leons> I'm at >3 weeks full-time, last time I checked
18:01 <jersey99> I spent a whole month with the 64-bit data path and XGMII, and I was happy to have something to take home after that. You are doing great!
18:36 *** TMM_ <[email protected]> has quit IRC (Quit: https://quassel-irc.org - Chat comfortably. Anywhere.)
18:36 *** TMM_ <[email protected]> has joined #litex
18:37 <_florent_> leons: Don't worry about spamming the channel with use-cases/applications; that's generally a lot more interesting than issues/implementation details :)
18:39 <_florent_> Otherwise, it's great to see you all improving/collaborating on LiteEth!
18:40 <_florent_> leons: I see #88 is still a draft, but it looks fine and could be merged if you want.
18:44 <leons> Ah, yes. For one, I wanted to update the description to describe the issue at hand a bit better, and I was looking for feedback precisely like jersey99's
18:45 <leons> AFAIK the problem was easily reproducible using ping on david-sawatzke's 32-bit data path. Maybe he can confirm that it works on the proper hardware as well?
18:49 <_florent_> Sure, let's wait a bit then
20:28 <jersey99> leons: I can confirm the problem still exists. I set a stream of data to flow out from the UDP port to a known destination IP/port. This is currently failing, as the core tries to send ARP requests to get the destination MAC. Traditionally, a quick patch for this has been to populate the ARP cache with the destination IP/MAC tuple inside arp.py. I can confirm that even patching that doesn't satisfy the core; the problem is deeper than what I looked for. Also, I see that different parts of the core now use different versions of packet (litex.soc.interconnect.packet or liteeth.packet); maybe it's related to that? idk
20:29 <leons> Ah, yes, I think I've stumbled over that exact same issue!
20:30 <leons> I do not yet have a solution. I suspect it's an issue with the ARP implementation, though; I did look at the Packetizers/Depacketizers involved in the ARP logic, and they looked as if they were behaving correctly.
20:31 <leons> Where do you see `litex.soc.interconnect.packet` being used? I think the Ethernet core should use only the `liteeth.packet` one.
20:32 <jersey99> liteeth/mac/common.py
20:33 <leons> Oh, I think imports from `litex.soc.interconnect.packet` are fine, as long as they are not using the Packetizer/Depacketizer from there.
20:34 <leons> https://github.com/enjoy-digital/liteeth/blob/master/liteeth/mac/common.py#L10 looks fine to me
20:35 <jersey99> Never mind, my bad. All the Packetizers are from liteeth.packet
20:36 <leons> No worries! It's great to have someone reproduce the issue.
20:40 <jersey99> leons: another question for you. On the 7-Series PCS/PMA core, what clock do you use for the XGMII? coreclk?
20:40 <jersey99> That is the recommended TX clock
20:43 <leons> Yes, the coreclk provided by the PCS/PMA core. As far as I understand, this should be generated from a PLL driven by the mgtrefclk input, right?
20:45 <leons> I've got to admit: for the transceivers I'm just puzzling together things I find on the web until it works. I was really happy when trial and error finally led to a working config on my KUP. And the 7-Series config is taken verbatim from verilog-ethernet.
20:53 <leons> jersey99: here you go: https://gist.github.com/lschuermann/ab5c34c9f48d00a93f0174fcbe5e62e0
20:54 *** jersey99 <[email protected]> has quit IRC (Quit: Client closed)
21:00 *** zjason <[email protected]> has quit IRC (Read error: Connection reset by peer)
21:02 *** zjason <[email protected]> has joined #litex
21:03 *** jersey99 <[email protected]> has joined #litex
22:42 <somlo> question for anyone familiar with linux-on-litex-vexriscv: when booting via boot.json, there's a kernel image referred to as "Image", e.g. here: https://github.com/litex-hub/linux-on-litex-vexriscv/blob/master/images/boot.json
22:43 <somlo> is that the result of building the kernel, i.e. whatever ends up in `arch/riscv/boot/Image`?
22:44 <somlo> if not, where does it (ultimately, after one peels back all the buildroot layers) come from?
23:14 *** mm002 <[email protected]> has quit IRC (Quit: Leaving)

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!