
Small Form Factor Kobol Helios64

Discussion in 'Hardware' started by Gareth Halfacree, 25 Oct 2020.

  1. Gareth Halfacree

    Gareth Halfacree WIIGII! (Staff, Administrator, Super Moderator)

    Joined: 4 Dec 2007
    Posts: 14,007
    Likes Received: 2,955
    Rather than cram up the Latest Purchases thread, I'll pop various thoughts in here instead.

    Starting with this being supremely satisfying (serial numbers munged):

    [Attached image: upload_2020-10-25_11-56-55.png]

    Dymo 11355s, if you're wondering. They fit perfectly on the handles, curled around. Dunno how long the adhesive will hold up, mind, but they'll do nicely for now. Can't decide whether to put "SPARE" on the spare ones or leave 'em blank now, though...
     
  2. Gareth Halfacree

    Step one...

    [Attached image: Screenshot from 2020-10-25 12-15-34.png]
     
  3. Gareth Halfacree

    Code:
    blacklaw@helios64:~$ lscpu
    Architecture:                    aarch64
    CPU op-mode(s):                  32-bit, 64-bit
    Byte Order:                      Little Endian
    CPU(s):                          6
    On-line CPU(s) list:             0-5
    Thread(s) per core:              1
    Core(s) per socket:              3
    Socket(s):                       2
    NUMA node(s):                    1
    Vendor ID:                       ARM
    Model:                           4
    Model name:                      Cortex-A53
    Stepping:                        r0p4
    CPU max MHz:                     1800.0000
    CPU min MHz:                     408.0000
    BogoMIPS:                        48.00
    NUMA node0 CPU(s):               0-5
    Vulnerability Itlb multihit:     Not affected
    Vulnerability L1tf:              Not affected
    Vulnerability Mds:               Not affected
    Vulnerability Meltdown:          Not affected
    Vulnerability Spec store bypass: Vulnerable
    Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
    Vulnerability Spectre v2:        Vulnerable
    Vulnerability Srbds:             Not affected
    Vulnerability Tsx async abort:   Not affected
    Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
    
    I feel like there should be mitigations listed for those vulnerabilities... Hmm. Also, that output's a little misleading: lscpu reports Cortex-A53 across the board, but only four of the six cores are the lower-power 1.4GHz Cortex-A53s - the other two are 1.8GHz Cortex-A72s. In theory the scheduler should be smart enough to shuffle work accordingly.
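    If you want to see the big.LITTLE split for yourself, the per-core maximum frequencies are exposed through cpufreq in sysfs. A quick sketch - the sysfs path assumes a standard mainline/Armbian cpufreq setup, and the expected 1,416,000/1,800,000 kHz figures are from the spec sheet rather than anything measured here:

    ```shell
    # max_freq N: print cpuN's maximum cpufreq frequency in kHz, or
    # "unknown" where the cpufreq driver doesn't expose one. On the
    # RK3399 I'd expect 1416000 for cpu0-3 (A53s) and 1800000 for
    # cpu4-5 (A72s).
    max_freq() {
        cat "/sys/devices/system/cpu/cpu$1/cpufreq/cpuinfo_max_freq" 2>/dev/null ||
            echo unknown
    }

    for n in 0 1 2 3 4 5; do
        echo "cpu$n max: $(max_freq "$n")"
    done
    ```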

    Code:
    blacklaw@helios64:~$ free -h
                  total        used        free      shared  buff/cache   available
    Mem:          3.7Gi       160Mi       3.3Gi       5.0Mi       268Mi       3.4Gi
    Swap:         1.9Gi          0B       1.9Gi
    
    lshw says it's seeing my two 6TB disks and my 240GB M.2 drive, so that's a start. It can see both my Ethernet ports, but the 2.5Gb one's showing a capacity of 1Gbit/s - that's waiting on software support, I believe. (Armbian's currently listed as "work in progress" for this, so there'll be a few things like that I come across.)
     
  4. Gareth Halfacree

    Nope, looks like I'm vulnerable. Boo-urns.

    Code:
    > SUMMARY: CVE-2017-5753:OK CVE-2017-5715:OK CVE-2017-5754:OK CVE-2018-3640:KO CVE-2018-3639:KO CVE-2018-3615:OK CVE-2018-3620:OK CVE-2018-3646:OK CVE-2018-12126:OK CVE-2018-12130:OK CVE-2018-12127:OK CVE-2019-11091:OK CVE-2019-11135:OK CVE-2018-12207:OK CVE-2020-0543:OK
    
    I'll have to see if anyone's flagged that up on the forum. It's not a massive problem, as it's not like I'll be using the thing to browse the web or play games or owt - but it'd be nice to have it fixed anyway.

    PWM fans are nice: pretty much silent, as far as I can tell, but they don't turn off at idle - just spin slowly. Loud when you reboot and they default to full speed, mind. Could always throw in a couple of Noctuas with speed-reducing wire, if I fancy.

    I'll run a few benchmarks now, compare it to my current server (x86) and a Pi 4.
     
  5. Gareth Halfacree

    [Attached image: upload_2020-10-25_13-10-16.png]

    What's surprising here is how modern Arm stuff can go toe-to-toe with an admittedly old and low-power x86 box and not come off too badly. There's no surprise that the N54L wins on single-thread performance, given it can clock to 2.2GHz, but it's nice to see the Helios64 winning on multi-thread thanks to its six cores.

    [Attached image: upload_2020-10-25_13-11-39.png]
    Just the two servers head-to-head this time, on cryptographic operations. Again, the 2.2GHz x86 system spanks the 1.4/1.8GHz Arm box, though somehow the Helios64 wins on SHA512 performance. Go figure.

    [Attached image: upload_2020-10-25_13-17-56.png]
    Linpack double-precision here. Outdated, I know, but I've got figures for the other systems already so it's nice to include, y'know.

    So, is the Helios64 faster than my existing N54L? Maybe, depending on what you're doing. The extra cores are nice, that's for sure.
     

  6. Gareth Halfacree

    There was another reason for wanting to upgrade from a two-core server, by the way: GNU Parallel.

    I've started scanning some old magazines, and part of the process involves taking the 600dpi scans and resampling them to 300dpi. Takes ages.

    So, I use GNU Parallel to run 16 ImageMagick workers at once. Does a good job, too, as this trial run on 100 test images proves:

    Code:
    blacklaw@shodan:/media/RAM Disk/testing$ time parallel --trc {.}-resampled.png -S : --ungroup --progress 'convert -limit thread 1 -density 600x600 -units PixelsPerInch {} -resample 300 -units PixelsPerInch -density 300x300 -units PixelsPerInch +repage {.}-resampled.png' ::: *png
    parallel: Warning: --trc ignored as there are no remote --sshlogin.
    
    Computers / CPU cores / Max jobs to run
    1:local / 16 / 16
    
    Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
    local:0/100/100%/2.8s
    
    real   4m43.183s
    user   7m50.102s
    sys   56m16.030s
    
    Wouldn't it be nice if I had more cores, though? Well, that Helios64 has six cores just sat doing nowt most of the time. Okay, they're low-power Arm cores - but there's still six of the buggers...

    Code:
    blacklaw@shodan:/media/RAM Disk/testing$ time parallel --trc {.}-resampled.png -S :,helios64.local --ungroup --progress 'convert -limit thread 1 -density 600x600 -units PixelsPerInch {} -resample 300 -units PixelsPerInch -density 300x300 -units PixelsPerInch +repage {.}-resampled.png' ::: *png
    
    Computers / CPU cores / Max jobs to run
    1:local / 16 / 16
    2:helios64.local / 6 / 6
    
    Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
    local:0/79/79%/2.8s  helios64.local:0/21/21%/10.4s  
    
    real   3m39.064s
    user   6m35.466s
    sys   43m51.553s
    
    That's taken it from 4m43s running purely on localhost to 3m39s running on both localhost and the Helios64. Saves me over a minute per 100 pages, in other words. Times that by a few hundred magazines, and that's a serious saving.

    (You can see the relative performance there, too: each page takes about 2.8s of wall time on my Ryzen 2700X, and about 10.4s on the Helios64's Rockchip RK3399.)
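    Quick sanity check on those numbers, since the saving is the whole point of the exercise:

    ```shell
    # Compare the two wall-clock times quoted above: 4m43s single-host
    # versus 3m39s with the Helios64 lending a hand.
    before=$((4 * 60 + 43))   # 283 s
    after=$((3 * 60 + 39))    # 219 s
    echo "saved $((before - after)) s per 100 pages"
    awk -v b="$before" -v a="$after" 'BEGIN { printf "speedup: %.2fx\n", b / a }'
    ```

    Call it a 64-second saving per 100 pages, or a shade under 1.3x overall.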
     
  7. RedFlames

    RedFlames ...is not a Belgian football team

    Joined: 23 Apr 2009
    Posts: 13,408
    Likes Received: 2,079
    it does look like a polished piece of kit.
     
  8. Yaka

    Yaka Well-Known Member

    Joined: 26 Jun 2005
    Posts: 1,757
    Likes Received: 180
    nifty server
     
  9. David

    David Take my advice — I’m not using it.

    Joined: 7 Apr 2009
    Posts: 14,608
    Likes Received: 3,121
    So, what about the UPS battery? Does it just cover you long enough to allow an elegant shutdown after power loss? Or is it more substantial? Given the cost of LiPo UPS batteries, I'm guessing the space limitations mean it'll cover you for a few minutes?
     
  10. Gareth Halfacree

    I'm thinking minutes, yes, but so far I've only tested it long enough to unplug the PSU and move it into an energy monitor. I'm going to wait until I'm sure it's charged, then do a test.

    Which is going to be guesswork: there's no charge monitoring, I don't think - just a flag that goes from 1 to 0 when mains power is lost, which you can use to trigger a safe shutdown. Doesn't matter to me, because it'll be sat on the other side of a real UPS which doesn't have Arm monitoring software - so it can run from the UPS's battery until that expires, then from its own internal battery for long enough to actually shut down.

    EDIT:
    Oh, I forgot: the thing also works (will work, it's not ready yet) in USB Direct Attached Storage (DAS) mode - so if you outgrow five bays, you can stack another on top for ten. And another for 15. Then you'd need a USB hub for more, I guess.
     
  11. Gareth Halfacree

    Trying it now. Found a status thing for the "gpio-charger" device which read "Not charging", and the idle power draw went from 16.5W to 15.5W so I'm taking that as evidence it was charged.

    Fun fact: when you unplug the mains, gpio-charger/status goes from "Not charging" to... "Charging." Skills.

    EDIT:
    About nine minutes and counting so far. Bear in mind, though, I've hdparm -Y'd all the drives, here, so this is going to be a best-case scenario.

    EDIT:
    Up to the 15 minute mark now. Not going to lie, it's doing quite a bit better than expected. Should be plenty of time to safely shut down, even with all five drive bays filled. Might have been nice to have a status LED for it on the front, tho'. Even if it was just "Charge/Discharge" and "Ready" or something - there's enough data in the gpio-charger device for that, if nothing else.

    EDIT:
    21 minutes. Wonder how long I'mma have to sit here? Should have automated the process instead of just pulling the plug and pressing go on a stopwatch. Bah.

    EDIT:
    Coming up to 40 minutes now. Blimey. I know the drives aren't spinning, but even so.

    EDIT:
    45 minutes. Bored of this game now.

    EDIT:
    An hour and it's still running. I guess I shouldn't be so surprised, the hard drives are the bit that really draw the power. Might spin 'em up tomorrow and run another test - and just put up with the fact I'll get an early "Unexpected power loss" entry in their SMART logs as a result.

    EDIT:
    An hour and 15. I really regret deciding to give this "a quick test" now.

    EDIT:
    Hour and a half. At least if I go for a pee now and miss the actual power-off, the error margin when I get back and hit stop on the stopwatch won't be so bad...

    EDIT:
    I'm calling time on this at 1h35m - I'll run a more realistic test with the drives powered up tomorrow, leave the thing charging overnight. Still, seems like those two little lithium-ion cells hold more juice than expected!
     
  12. bawjaws

    bawjaws Well-Known Member

    Joined: 5 Dec 2010
    Posts: 3,972
    Likes Received: 664
    Sounds like a custom modding job. This is bit-tech, after all.
     
  13. Gareth Halfacree

    Doable: there's a GPIO header in there. Wouldn't fancy trying to drill a neat hole through the thing, tho'.
     
  14. yuusou

    yuusou Well-Known Member

    Joined: 5 Nov 2006
    Posts: 2,417
    Likes Received: 508
    I know some people that are all about drilling neat holes.
     
  15. liratheal

    liratheal Sharing is Caring

    Joined: 20 Nov 2005
    Posts: 11,929
    Likes Received: 1,373

    That's why they make LED bezels. To hide your crimes.
     
  16. Gareth Halfacree

    Started the real test, with two hard drives and an SSD in an idle-but-awake state, at 0837. Still running.
     
  17. Gareth Halfacree

    @David Test complete: with the SSD and two 6TB hard drives spun up but otherwise idle, the thing ran for 1h39m before unceremoniously switching off. The promised "it'll safely shut down when it gets to a certain percentage of battery" is nowhere to be seen - and I don't think it ever will arrive, 'cos as far as I can see there's no way for the OS to know the charge level. The best you're likely to get is setting a timer going when the power fails and shutting down once it's expired.

    Which, to be fair, with an hour and a half to play with, ain't so bad. Could set the timer for an hour and still have plenty left in the tank for a safe shutdown.
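    That timer idea is simple enough to sketch in shell. To be clear, this is my own rough sketch against the gpio-charger sysfs interface, not anything Kobol ships, and the 60-minute grace period is just a number chosen to leave plenty of margin inside the 1h39m measured:

    ```shell
    #!/bin/sh
    # Sketch of a mains-fail shutdown timer for the Helios64 UPS.
    SUPPLY=/sys/class/power_supply/gpio-charger/online
    GRACE=60   # minutes on battery before shutting down - my figure, not Kobol's

    # tick ONLINE COUNT: bump the minutes-on-battery counter while mains
    # ("online") reads 0; reset it to zero the moment mains returns.
    tick() {
        if [ "$1" = "0" ]; then echo $(($2 + 1)); else echo 0; fi
    }

    count=0
    while [ -r "$SUPPLY" ]; do
        count=$(tick "$(cat "$SUPPLY")" "$count")
        if [ "$count" -ge "$GRACE" ]; then
            shutdown -h now "Mains lost for ${GRACE}m; shutting down on UPS"
        fi
        sleep 60
    done
    ```

    Run it from a systemd unit or rc.local and it just idles, counting minutes whenever the mains flag drops.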
     
  18. yuusou

    Is there no way to read the battery level through the GPIO?
     
  19. Gareth Halfacree

    Not as far as I can see: the only uevent values exposed are "status" which reads "Not charging" when it's full and "Charging" when it's discharging or charging, a type of "Mains", an online status of 1 when mains power is active and 0 when it isn't, and the name "gpio-charger". That's yer lot.

    Code:
    blacklaw@helios64:~$ cat /sys/class/power_supply/gpio-charger/uevent
    POWER_SUPPLY_NAME=gpio-charger
    POWER_SUPPLY_TYPE=Mains
    POWER_SUPPLY_ONLINE=1
    POWER_SUPPLY_STATUS=Charging
    
    You can combine "online" and "status" to get charging, charged, discharging (i.e. "1" and "Charging" means charging, "1" and "Not charging" means charged, "0" and "Charging" actually means discharging, and "0" and "Not charging" means you've done something magical and managed to read the status out of a machine that's powered off), but there's nothing I've found that'd give you even a sniff of a voltage - never mind actual charge statistics.

    It's possible there's something just awaiting software support, but I'm not holding my breath.
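    Those four combinations are easy enough to wrap up yourself, mind. A sketch - the state names are my own labels, not anything the driver actually reports:

    ```shell
    # ups_state ONLINE STATUS: collapse the gpio-charger's two exposed
    # values into one human-readable state.
    ups_state() {
        case "$1/$2" in
            "1/Charging")     echo charging ;;
            "1/Not charging") echo charged ;;
            "0/Charging")     echo discharging ;;
            *)                echo unknown ;;
        esac
    }

    # Feed it the live values, where the device actually exists:
    d=/sys/class/power_supply/gpio-charger
    if [ -r "$d/online" ]; then
        ups_state "$(cat "$d/online")" "$(cat "$d/status")"
    fi
    ```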
     
  20. Gareth Halfacree

    Oh, while I'm here, programmatic proof that the scheduler is indeed clever enough to schedule work on the two "big" cores by preference.

    Simple ten-second synthetic benchmark, no smart options:

    Code:
    blacklaw@helios64:~$ sysbench --test=cpu run
    WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
    sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)
    
    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time
    
    
    Prime numbers limit: 10000
    
    Initializing worker threads...
    
    Threads started!
    
    CPU speed:
        events per second:  1782.83
    
    General statistics:  
        total time:                          10.0004s
        total number of events:              17847
    
    Latency (ms):
             min:                                    0.56
             avg:                                    0.56
             max:                                    2.08
             95th percentile:                        0.57
             sum:                                 9994.48
    
    Threads fairness:
        events (avg/stddev):           17847.0000/0.00
        execution time (avg/stddev):   9.9945/0.00
    
    So, that's 1,782.83 events per second.

    Now, on these big.LITTLE things the first cores are the LITTLEs and the last cores are the bigs. This is a hexacore, so that's 0-3 as LITTLEs and 4-5 as bigs. Let's start by forcing the benchmark to run on CPU0, one of the 1.4GHz Cortex-A53 LITTLEs:

    Code:
    blacklaw@helios64:~$ taskset -c 0 sysbench --test=cpu run
    WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
    sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)
    
    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time
    
    
    Prime numbers limit: 10000
    
    Initializing worker threads...
    
    Threads started!
    
    CPU speed:
        events per second:   706.07
    
    General statistics:
        total time:                          10.0007s
        total number of events:              7065
    
    Latency (ms):
             min:                                    1.41
             avg:                                    1.41
             max:                                    1.57
             95th percentile:                        1.42
             sum:                                 9996.35
    
    Threads fairness:
        events (avg/stddev):           7065.0000/0.00
        execution time (avg/stddev):   9.9964/0.00
    
    706.07 events per second - over 1,000 EPS less. There's little doubt, then, that the first run was on a big core - but just to be sure...

    Code:
    blacklaw@helios64:~$ taskset -c 5 sysbench --test=cpu run
    WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
    sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)
    
    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time
    
    
    Prime numbers limit: 10000
    
    Initializing worker threads...
    
    Threads started!
    
    CPU speed:
        events per second:  1790.62
    
    General statistics:
        total time:                          10.0004s
        total number of events:              17913
    
    Latency (ms):
             min:                                    0.56
             avg:                                    0.56
             max:                                    1.02
             95th percentile:                        0.57
             sum:                                 9994.90
    
    Threads fairness:
        events (avg/stddev):           17913.0000/0.00
        execution time (avg/stddev):   9.9949/0.00
    
    Back to 1,790-ish. So, there you go: Linux is, indeed, smart enough to schedule jobs properly on a big.LITTLE.
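    If you wanted to sweep all six cores rather than spot-check two, something like this would do it (assumes sysbench and taskset are installed; the ~706 and ~1,790 figures are just the results from the runs above):

    ```shell
    # bench_core N: pin a single-threaded sysbench run to core N and
    # print its events-per-second figure, or "n/a" if sysbench (or the
    # core) isn't available. Expect ~706 on the LITTLE cores 0-3 and
    # ~1,790 on the big cores 4-5.
    bench_core() {
        taskset -c "$1" sysbench cpu run 2>/dev/null |
            awk '/events per second/ { print $NF; found = 1 }
                 END { if (!found) print "n/a" }'
    }

    for core in 0 1 2 3 4 5; do
        echo "cpu$core: $(bench_core "$core") events/sec"
    done
    ```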
     
