What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

I just visited the website of the European Data Protection Supervisor @EDPS and noticed that they have a @Mastodon icon / link in their social media navigation bar in the upper right side of the screen.

This is simply AMAZING!!!

Thank you so much for being here on the fediverse and for raising awareness about it with your website visitors ❤️

@EUCommission European Commission, can you please be next? I argued for this in my latest article: blog.elenarossini.com/openness

A screenshot of the homepage of the European Data Protection Supervisor. There is a top navigation menu and links to articles, and in the top right corner of the screen there is a social media navigation bar with a link to their Mastodon profile.
A close-up image of the social media navigation bar in the upper right corner of their website, showing the icons of (from left to right): X, Mastodon, LinkedIn, Instagram, Spotify, YouTube and RSS.

[Explainer] Sand lance fishing in the Harima Nada opened on the 17th but will already close tomorrow (the 18th), as record-poor catches continue. How big is this year's haul?
news.ntv.co.jp/category/societ
Argh, every single year they know there won't be any fish, and yet they repeat the same cycle: the season opens, the catch is as poor as expected, and it closes immediately!!
They keep making the excuse that "the sand lance didn't decline because of overfishing, the sea water just got too clean to catch them any more", but continuing to fish when you know there are barely any left is precisely what overfishing means 💢💢 Fine, fine, overfishing isn't why they declined; just stop the overfishing you're doing right now 💢

I refuse to take up jobs in which I train AI to do Physics, but I am taking up jobs in which I correct academic articles written by AI - and hoo boy I think I should charge more. Literally more annoying to edit and rewrite than writing stuff from scratch.

So yeah, hire this actual Indian in the first place. Faster and cheaper in the long run.

Pieter Klok, editor-in-chief of de Volkskrant, writes the following in a commentary headlined 'To protect democracy against big tech, firmer policy is needed':

'If the [European] Commission is serious, if it really wants to protect democracy against the undermining influence of Big Tech, then firmer policy is needed. The weakness of the DSA is that tech companies essentially only have an obligation of effort (they have a duty to counter the spread of disinformation) but no obligation of result: unlike other mass media, they are not liable for the disinformation and hate spread via their platforms.

With an obligation of effort there is plenty of room for debate. The tech companies can convince themselves, their users and the European Commission that they are doing their utmost, and meanwhile carry on spreading hate.

With an obligation of result the world becomes much clearer: a platform that distributes criminal content becomes criminally liable itself. Given the enormous size of the tech companies, and the modest size of their conscience, that option deserves serious consideration. In the meantime it would be good if users used their power to bring these companies to their senses.'

I would like to take this opportunity to point you to this promising course of action:
thefirewall.eu/

A few of the things I've learned in the run-up to taping out our first chip that working with FPGAs had not prepared me for (fortunately, the folks driving the tape-out had done this before and were not surprised):

  • There's a lot of analogue stuff on a chip. Voltage regulators, PLLs, and so on all need to be custom designed for each process. They are expensive to license because they're difficult to design and there are only a handful of companies buying them. The really big companies will design their own in house, but everyone else needs to buy them. The problem is that 'everyone else' is not actually many people.
  • Design verification (DV) is a massive part of the total cost. This needs people who think about the corner cases in designs. The industry rule of thumb is that you need 2-3 DV engineers per RTL engineer to make sure that the thing you tape out is probably correct. In an FPGA, you can just fix a bug and roll a new bitfile, but with a custom chip you have a long turnaround and a lot of cost to fix a bug. This applies at the block level and at the system level. Things like ISA test suites are a tiny part of this because they're not adversarial. To verify a core, you need to understand the microarchitecture-specific corner cases where things might go wrong and then make sure testing covers them. We aren't using CVA6, but I was talking to someone working on it recently and they had a fun case that DV had missed: if a jump target spanned a page boundary, and one of those pages was not mapped, rather than raising a page fault the core would just fill in 16 random bits and execute a random instruction. ISA tests typically won't cover this; a good DV team would know that anything spanning pages, in all possible configurations of permission and presence (and at all points in speculative execution), is essential for functional coverage.
  • Most of the tools for the back end are proprietary (and expensive, with per-seat, per-year licenses). This includes tools for formal verification. There are open-source tools for formal verification, but the proprietary ones are mostly better in their error reporting (if the checks pass, they're fine; if they don't, debugging them is much harder).
  • A lot of the vendors with bits of IP that you need are really paranoid about it leaking. If you're lucky, you'll end up with things that you can access only from a tightly locked-down chamber system. If not, you'll get a simulator and a basic floorplan and the integration happens later.
  • The back-end layout takes a long time. For FPGAs, you write RTL and you're done. The thing you send to the fab is basically a 3D drawing of what to etch on the chip. The flow from the RTL to the 3D picture is complex and time consuming.
  • On newer processes, you end up with a load of places where you need to make tradeoffs. SRAM isn't just SRAM, there are a bunch of different options with different performance, different leakage current, and so on. These aren't small differences. On 22fdx, the ultra-low-leakage SRAM has 10% of the idle power of the normal one, but is bigger and slower. And this is entirely process dependent and will change if you move to a new one.
  • A load of things (especially various kinds of non-volatile memory) use additional layers. For small volumes, you put your chip on a wafer with other people's chips. This is nice, but it means that not every kind of layer happens on every run, which restricts your availability.
  • I already knew this from previous projects, but it's worth repeating: the core is the easy bit. There are loads of other places where you can gain or lose 10% performance depending on design decisions (and these add up really quickly), or where you can accidentally undermine security. The jump from 'we have RTL for a core' to 'we have a working SoC taped out' is smaller than getting to that point from a standing start, but it's not much smaller. Don't think 'yay, we have open-source RTL for a RISC-V core!' means 'we can make RISC-V chips easily!'.
  • I really, really, really disapprove of physics. It's just not a good building block for stuff. Digital logic is so much nicer.
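The cross-page jump bug is a good illustration of why functional coverage has to be enumerated systematically rather than left to ISA tests. As a rough sketch of that enumeration (in Python purely for illustration; a real DV team would express this as a SystemVerilog coverage model, and `cross_page_jump_cases` is a name I've made up, not any tool's API), here is what spelling out the presence/permission configurations for that one corner case might look like:

```python
from itertools import product

def cross_page_jump_cases():
    """Enumerate coverage points for a jump target that spans a page
    boundary: each half of the target may be mapped or unmapped, and a
    mapped half may or may not carry execute permission."""
    cases = []
    for first_mapped, second_mapped, first_exec, second_exec in product(
        [True, False], repeat=4
    ):
        # Execute permission is only meaningful on a mapped page,
        # so skip the contradictory unmapped-but-executable combos.
        if (not first_mapped and first_exec) or (not second_mapped and second_exec):
            continue
        cases.append({
            "first_half": {"mapped": first_mapped, "executable": first_exec},
            "second_half": {"mapped": second_mapped, "executable": second_exec},
            # A correct core must raise a page fault whenever either half
            # is unmapped or non-executable; it must never fetch garbage
            # bits, as in the CVA6 bug described above.
            "expect_fault": not (
                first_mapped and first_exec and second_mapped and second_exec
            ),
        })
    return cases

cases = cross_page_jump_cases()
```

Even this tiny example yields nine configurations, only one of which should execute without a fault, and each still needs to be hit at every interesting point in speculative execution. Multiply that out across a whole microarchitecture and the 2-3 DV engineers per RTL engineer rule of thumb starts to look conservative.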