Saturday, 2021-04-03

*** tpb has joined #symbiflow00:00
*** curtosis[away] has joined #symbiflow00:41
*** Degi_ has joined #symbiflow01:13
*** Degi has quit IRC01:15
*** Degi_ is now known as Degi01:15
*** curtosis[away] has quit IRC01:16
*** Bleepshop has joined #symbiflow01:51
*** hansfbaier has joined #symbiflow02:03
*** curtosis has joined #symbiflow02:08
*** cr1901_modern has quit IRC02:33
*** cr1901_modern has joined #symbiflow02:34
*** curtosis has quit IRC02:34
*** curtosis[away] has joined #symbiflow02:36
*** curtosis[away] has quit IRC02:50
*** hansfbaier has quit IRC03:00
*** Bleepshop has left #symbiflow04:30
*** kraiskil has joined #symbiflow06:07
*** kgugala has joined #symbiflow06:24
*** kgugala_ has quit IRC06:27
*** kgugala has quit IRC06:27
*** kgugala has joined #symbiflow06:27
*** kraiskil has quit IRC08:29
*** kraiskil has joined #symbiflow09:51
*** proteusguy has quit IRC10:31
*** proteusguy has joined #symbiflow10:45
*** kraiskil has quit IRC11:22
*** kraiskil has joined #symbiflow14:09
sf-slack<imruinland> Hi, has anyone tried to build an Arty A7 bitstream for Linux-on-LiteX/VexRiscv with SymbiFlow? Currently I bumped into an error during the synth phase: ```Setting parameter \IO_LOC_PAIRS to value sdcard_cd:H2 on cell $iopadmap$digilent_arty.sdcard_cd   ERROR: Cell $iopadmap$digilent_arty.sdcard_cd of type \IBUF doesn't support the \SLEW attribute``` After removing the sdcard, the flow still fails while making the netlist: `Message: Failed to find matching architecture model for 'BSCANE2'`  The output of the vpr log is attached below:14:43
sf-slack<kgugala> hi @imruinland, did you try the LiteX Linux demo from the examples repo? https://symbiflow-examples.readthedocs.io/en/latest/building-examples.html#linux-litex-demo 14:56
<tpb> Title: Building example designs - SymbiFlow examples documentation (at symbiflow-examples.readthedocs.io)14:56
sf-slack<kgugala> https://github.com/SymbiFlow/symbiflow-examples 14:56
sf-slack<kgugala> the log you pasted ends with `Message: Failed to find matching architecture model for 'BSCANE2'`14:57
sf-slack<kgugala> `BSCANE2` is not yet supported by the open toolchain14:57
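For reference, the examples flow kgugala points to boils down to setting up the conda environment from the guide and running make in the demo directory. The sketch below is only a rough outline; the directory and TARGET names are assumptions, so follow the linked documentation for the exact steps:
```
# Rough outline of the symbiflow-examples LiteX Linux demo flow
# (directory and TARGET names are assumptions -- see the linked guide)
git clone https://github.com/SymbiFlow/symbiflow-examples
cd symbiflow-examples
conda activate xc7                 # environment created during the examples setup
cd xc7/linux_litex_demo            # hypothetical demo directory from the guide
TARGET="arty_35" make              # build the bitstream for the Arty A7 (35T)
```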
*** daniellimws has quit IRC15:16
*** daniellimws has joined #symbiflow15:16
sf-slack<imruinland> @kgugala Hi, not really. I tried the one from the official Linux-on-LiteX/VexRiscv repo. I saw the commit indicating that the SymbiFlow toolchain selection has become available for the Arty A7, so I gave it a shot: https://github.com/litex-hub/linux-on-litex-vexriscv/commit/1b487164cac1faa772e97d37448b1e0722c95ec8  The reason I want to use the official repo is that I'd like to have a 2-core SMP with the recently added RVC and FPU support.15:37
sf-slack<kgugala> You can try following the guide from the examples and simply use the smp+fpu CPU version15:43
sf-slack<imruinland> @kgugala, thanks for the tips :-) So basically I just need to change the build command in the example to `./digilent_arty.py --toolchain symbiflow --cpu-type vexriscv_smp --cpu-count=2 --with-fpu --with-rvc --build` and then it should work?15:46
sf-slack<kgugala> Yep15:46
sf-slack<imruinland> Thanks a lot, gonna give it a try :)15:46
*** curtosis has joined #symbiflow15:55
sf-slack<riddhima23singh> Can someone help me with the argument of "--part" and confirm the path of the root database, as this is giving an error?15:56
sf-slack<imruinland> Sorry, I'm a bit confused. `--cpu-count` doesn't seem to be recognized by `digilent_arty.py`. I purged my LiteX installation and reinstalled from scratch, but it's still the same.16:09
sf-slack<imruinland> I suppose I can just change the default value instead16:10
sf-slack<imruinland> `digilent_arty.py: error: unrecognized arguments: --cpu-count 2`16:11
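A quick way to see which CPU-related switches the target script actually exposes is to inspect its argparse help (nothing beyond the commands already shown above is assumed here):
```
# List the CPU-related options digilent_arty.py currently understands
./digilent_arty.py --help | grep -i cpu
```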
*** curtosis has quit IRC16:23
sf-slack<imruinland> Oops, it fails to generate the Verilog files ......16:23
```
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/ruinland/lolv_3/pythondata-cpu-vexriscv-smp/pythondata_cpu_vexriscv_smp/verilog/ext/VexRiscv/target/scala-2.11/vexriscv_2.11-2.0.0.jar ...
[info] Done packaging.
[info] Running (fork) vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen --cpu-count=2 --ibus-width=32 --dbus-width=32 --dcache-size=4096 --icache-size=4096 --dcache-ways=1 --icache-ways=1 --litedram-width=128 --aes-instruction=False --out-of-order-decoder=True --wishbone-memory=False --fpu=True --cpu-per-fpu=4 --rvc=True --netlist-name=VexRiscvLitexSmpCluster_Cc1_Iw32Is4096Iy1_Dw32Ds4096Dy1_Ldw128_Ood_Fpu4_Rvc --netlist-directory=/home/ruinland/lolv_3/pythondata-cpu-vexriscv-smp/pythondata_cpu_vexriscv_smp/verilog
[info] [Runtime] SpinalHDL v1.4.4    git head : b42ad071474c2042bf13d9ff30147b0e378b322d
[info] [Runtime] JVM max memory : 1700.0MiB
[info] [Runtime] Current date : 2021.04.04 00:18:03
[info] [Progress] at 0.000 : Elaborate components
[info] **********************************************************************************************
[info] [Warning] Elaboration failed (0 error).
[info]           Spinal will restart with scala trace to help you to find the problem.
[info] **********************************************************************************************
[info] [Progress] at 0.278 : Elaborate components
[error] Exception in thread "main" java.lang.AssertionError: assertion failed
[error]   at scala.Predef$.assert(Predef.scala:156)
[error]   at spinal.core.package$.assert(core.scala:420)
[error]   at vexriscv.ip.DataCacheConfig.<init>(DataCache.scala:41)
[error]   at vexriscv.demo.smp.VexRiscvSmpClusterGen$.vexRiscvConfig(VexRiscvSmpCluster.scala:233)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$$anonfun$parameter$1.apply(VexRiscvSmpLitexCluster.scala:146)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$$anonfun$parameter$1.apply(VexRiscvSmpLitexCluster.scala:145)
[error]   at scala.collection.generic.GenTraversableFactory.tabulate(GenTraversableFactory.scala:148)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$.parameter(VexRiscvSmpLitexCluster.scala:145)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$$anon$1.<init>(VexRiscvSmpLitexCluster.scala:184)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$.dutGen(VexRiscvSmpLitexCluster.scala:182)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$$anonfun$29.apply(VexRiscvSmpLitexCluster.scala:192)
[error]   at vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen$$anonfun$29.apply(VexRiscvSmpLitexCluster.scala:192)
[error]   at spinal.core.internals.PhaseCreateComponent$$anonfun$impl$2.apply$mcV$sp(Phase.scala:2169)
[error]   at spinal.core.internals.PhaseCreateComponent$$anonfun$impl$2.apply(Phase.scala:2164)
[error]   at spinal.core.internals.PhaseCreateComponent$$anonfun$impl$2.apply(Phase.scala:2164)
[error]   at spinal.core.fiber.Engine$$anonfun$create$1.apply$mcV$sp(AsyncCtrl.scala:144)
[error]   at spinal.core.fiber.AsyncThread$$anonfun$1.apply$mcV$sp(AsyncThread.scala:58)
[error]   at spinal.core.fiber.EngineContext$$anonfun$newJvmThread$1.apply$mcV$sp(AsyncCtrl.scala:39)
[error]   at spinal.sim.JvmThread.run(SimManager.scala:51)
[error] Nonzero exit code returned from runner: 1
[error] (Compile / runMain) Nonzero exit code returned from runner: 1
[error] Total time: 158 s, completed Apr 4, 2021 12:18:05 AM
Traceback (most recent call last):
  File "/home/ruinland/lolv_3/./litex-boards/litex_boards/targets/digilent_arty.py", line 157, in <module>
    main()
  File "/home/ruinland/lolv_3/./litex-boards/litex_boards/targets/digilent_arty.py", line 150, in main
    builder.build(**builder_kwargs, run=args.build)
  File "/home/ruinland/lolv_3/litex/litex/soc/integration/builder.py", line 249, in build
    self.soc.finalize()
  File "/home/ruinland/lolv_3/migen/migen/fhdl/module.py", line 156, in finalize
    subfragments = self._collect_submodules()
  File "/home/ruinland/lolv_3/migen/migen/fhdl/module.py", line 149, in _collect_submodules
    r.append((name, submodule.get_fragment()))
  File "/home/ruinland/lolv_3/migen/migen/fhdl/module.py", line 102, in get_fragment
    self.finalize()
  File "/home/ruinland/lolv_3/migen/migen/fhdl/module.py", line 157, in finalize
    self.do_finalize(*args, **kwargs)
  File "/home/ruinland/lolv_3/litex/litex/soc/cores/cpu/vexriscv_smp/core.py", line 423, in do_finalize
    self.add_sources(self.platform)
  File "/home/ruinland/lolv_3/litex/litex/soc/cores/cpu/vexriscv_smp/core.py", line 332, in add_sources
    self.generate_netlist()
  File "/home/ruinland/lolv_3/litex/litex/soc/cores/cpu/vexriscv_smp/core.py", line 245, in generate_netlist
    raise OSError('Failed to run sbt')
OSError: Failed to run sbt
```
16:23
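For what it's worth, the failing generator step can be reproduced by hand outside LiteX to get a cleaner view of the assertion; the working directory and flags below are copied from the log above, with only the netlist name and output directory swapped for arbitrary placeholders:
```
# Re-run the VexRiscv cluster generator directly (flags copied from the log above;
# the netlist name and output directory here are arbitrary placeholders)
cd /home/ruinland/lolv_3/pythondata-cpu-vexriscv-smp/pythondata_cpu_vexriscv_smp/verilog/ext/VexRiscv
sbt "runMain vexriscv.demo.smp.VexRiscvLitexSmpClusterCmdGen \
     --cpu-count=2 --ibus-width=32 --dbus-width=32 \
     --dcache-size=4096 --icache-size=4096 --dcache-ways=1 --icache-ways=1 \
     --litedram-width=128 --aes-instruction=False --out-of-order-decoder=True \
     --wishbone-memory=False --fpu=True --cpu-per-fpu=4 --rvc=True \
     --netlist-name=DebugCluster --netlist-directory=/tmp"
```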
sf-slack<kgugala> This is a VexRiscv issue16:25
sf-slack<kgugala> If there is no pregenerated Vex config for what you requested, LiteX will try to generate it16:25
sf-slack<kgugala> I'd ask about it on the VexRiscv Gitter16:26
sf-slack<kgugala> Or provide a pregenerated Vex Verilog netlist16:26
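As a quick sanity check, the pregenerated cluster netlists that ship with the pythondata package can be listed; the directory is the same --netlist-directory that appears in the log above:
```
# List the pregenerated VexRiscv SMP cluster netlists bundled with the package
ls /home/ruinland/lolv_3/pythondata-cpu-vexriscv-smp/pythondata_cpu_vexriscv_smp/verilog/*.v
```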
sf-slack<imruinland> I see. I'm gonna spin up a container and reconstruct the build environment from scratch to clarify the situation.16:27
sf-slack<imruinland> Sorry for bothering you :)16:28
sf-slack<kgugala> No worries :) let us know how it went in the end16:29
*** cr1901_modern has quit IRC16:35
*** cr1901_modern has joined #symbiflow16:35
*** curtosis has joined #symbiflow16:51
-_whitenotifier-5- [symbiflow-examples] Mattia9875 opened issue #137: Missing plugin in yosys installation (conda) - https://git.io/JY1dr 17:06
*** curtosis is now known as curtosis[away]17:12
*** curtosis[away] has quit IRC17:15
*** bjorkint0sh has quit IRC17:23
sf-slack<kgugala> what do you get when you run `digilent_arty.py --help` ?18:07
*** BryceSchroeder has joined #symbiflow18:08
sf-slack<kgugala> the `--cpu-count` option comes from the CPU: https://github.com/enjoy-digital/litex/blob/master/litex/soc/cores/cpu/vexriscv_smp/core.py#L58 18:08
sf-slack<kgugala> it's not dependent on the platform18:09
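One quick way to see where the option is declared and whether the board target forwards it is to grep the checkouts from the traceback; the `args_fill`/`args_read` helper names are an assumption about the LiteX sources of that time, so adjust the pattern to whatever the current code uses:
```
# Where --cpu-count is declared: the VexRiscv SMP core, not the board target
grep -n "cpu-count" litex/litex/soc/cores/cpu/vexriscv_smp/core.py
# Check whether the target wires the CPU's extra arguments through
# (args_fill/args_read are assumed helper names -- adjust as needed)
grep -n "args_fill\|args_read" litex-boards/litex_boards/targets/digilent_arty.py
```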
<BryceSchroeder> I tried to fuzz the XC7K420T, following the "how to add a new part to Project X-Ray" guide, and my computer crashed. Not sure what happened. Before I try to figure it out further, what kind of hardware, in terms of RAM and disk space, do I need to throw at that problem to expect it to work? I'm slightly concerned that no one has already done it, since it seems like this part only requires Vivado and the inclination to do so. Am I missing something, or is there more to do beyond looking at the resources of the part in Vivado and plugging the coordinate ranges into the settings script for that part in Project X-Ray?18:10
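For context on what "plugging the coordinate ranges into the settings script" involves: each part gets a small shell settings file in Project X-Ray. The sketch below only illustrates the shape of such a file; the variable names follow the existing settings/*.sh files, while the part number and tile ranges are unverified placeholders for the XC7K420T:
```
# Hypothetical settings sketch for a new Kintex-7 part -- all values are placeholders
export XRAY_DATABASE="kintex7"
export XRAY_PART="xc7k420tffg1156-2"            # placeholder package/speed grade
export XRAY_ROI_FRAMES="0x00000000:0xffffffff"
# Tile/site ranges read off the device view in Vivado (placeholders):
export XRAY_ROI_TILEGRID="SLICE_X0Y0:SLICE_X191Y349 RAMB18_X0Y0:RAMB18_X12Y139"
```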
sf-slack<kgugala> if this arg is not propagated to the top level, it's a bug in LiteX18:10
sf-slack<kgugala> the fuzzers will run multiple instances of Vivado; the memory requirements are determined by how much memory Vivado needs to handle the part you're targeting18:12
sf-slack<kgugala> you may try running fewer parallel jobs18:13
sf-slack<kgugala> note that running all the fuzzers may take a significant amount of time18:13
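A minimal sketch of launching the fuzzers with reduced parallelism, assuming a settings file like the one sketched above; `-j1` alone already limits things to one job at a time, and `MAX_VIVADO_PROCESS` is an assumption about the prjxray makefiles rather than a verified knob:
```
# Run the fuzzers serially to keep Vivado's memory footprint manageable
source settings/kintex7_420t.sh   # hypothetical settings file from the earlier sketch
cd fuzzers
make -j1 MAX_VIVADO_PROCESS=1     # MAX_VIVADO_PROCESS is an assumption -- check the Makefiles
```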
*** futarisIRCcloud has joined #symbiflow18:20
<BryceSchroeder> I think you were either talking to someone else with the same problem, or maybe to me with someone else's name? Either way, I'll try running it with just one Vivado instance and see if it crashes.18:20
<BryceSchroeder> Thanks.18:23
sf-slack<kgugala> BryceSchroeder: I was talking about your issue :)18:29
<BryceSchroeder> Cool - thank you. I really want to contribute something useful to this project, and I have an XC7K420T dev board on the way from Shenzhen to test with, hopefully18:30
<BryceSchroeder> Unfortunately I've never really used Vivado as a "power user," though, so it may be hard going.18:30
<BryceSchroeder> Mostly I've just turned Verilog or MyHDL into bitstreams using whatever. There's a bit of a learning curve.18:31
<mithro> BryceSchroeder: The BYU team have actually been putting together an "FPGA Bootcamp" at https://byu-cpe.github.io/ComputingBootCamp/ that includes a lot of useful resources on getting started with advanced Vivado18:32
<tpb> Title: Home • Immerse Computing Bootcamp (at byu-cpe.github.io)18:32
<BryceSchroeder> Thanks, I'll check it out.18:33
<mithro> BryceSchroeder: Do note that nobody has really tried the Kintex parts yet, and I would recommend having a pretty powerful computer for this work18:35
<BryceSchroeder> Alright. Well, I guess I'll upgrade if necessary. Maybe I should put Vivado on my home server; it's got 64 GB of memory (I hope that is enough? I don't have much of a sense of how demanding this is, quantitatively).18:38
<BryceSchroeder> (It's not the fastest, but shipping from Shenzhen isn't the fastest either, so.)18:38
*** kraiskil has quit IRC19:52
*** BryceSchroeder has quit IRC22:45

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!