Search results

The PyOhio 2026 CFP is now LIVE! This is the moment you've all been waiting for!

We're really excited to launch PyOhio 2026 with a bang, and we want to hear what you have to say! Our CFP is open to ANYONE, regardless of experience level, and stays open until April 19, AoE, so get those proposals started! You can submit up to 3 talk proposals using the form here: pretalx.com/pyohio-2026/cfp

Boosts are appreciated, and thank you for your support!

Artist's conception of the NASA Space Launch System rocket in flight in a clear blue sky. The words "PyOhio 2026 IS GO!" have been superimposed over the sky background.

Image credit: NASA/MSFC, Public domain, via Wikimedia Commons

youtu.be/GfH4QL4VqJ0

ENG - Learning Python for data work in the social sciences is one of my current goals.
I had greaaaaaat ambitions for this spring break week, which is already drawing to a close… (we did lose an hour to daylight saving time — that counts for something, right?)
So…
✔️ Followed the excellent introductory session by Emilien Schultz (Urfist de Lyon): a course that invites you to reflect on what a programming language actually does to our data — and all that implies — well beyond purely technical or "utilitarian" considerations. Unfortunately, I won't be able to attend the two following sessions, which are offered in person…
✔️ Tracked down a few manuals and tutorials to get the adventure started
✔️ And, as always essential for me: some "context." In this case, the documentary available online — Python, The Documentary (Cult. Repo) — recommended by Emilien during the session. A fascinating dive into the creation of Python — born out of a failure —, its development, the governance challenges of a community that never stopped growing, its near-disappearance, and the belated but real inclusion of women developers, with one of the community's leading figures, @mariatta (Mariatta). The documentary tells the story primarily from the perspective of the founders and early developers rather than the broader community — there is doubtless much more to be said — but it is a wonderful introduction to what has become one of the most essential programming languages around.

Right. Time to open a terminal and write some actual code!


I saw a TIL post on bsky today about using '/' with pathlib.Path objects. Some other neat things recently added to Python:
- itertools.batched (3.12)
- accessing re.Match groups using [] notation (faster than calling group()) (3.6)
- 1_000_000 is a legal syntax for 1000000 (3.6)
- command-line access to stdlib modules (python -m <name>):
  - uuid (3.12)
  - json (3.14)
  - random (3.13)
  - sqlite3 (3.12)
  - http.server (3.4)
- f-strings (3.6)
- true multithreading (3.14)
- contextlib.chdir (3.11)


Current status of PEPs for 3.15 with two months until feature freeze:

Informational: 1 (release schedule)

Open (under consideration): 20

Accepted (may not be implemented yet): 5

Finished (done, with a stable interface): 4

Deferred (postponed pending further research or updates): 1

Rejected, Superseded, and Withdrawn: 2

Unmerged PRs: 6

peps.python.org


RE: mastodon.social/@fastlydevs/11

Huge thanks to @fastlydevs for 10+ years of keeping PyPI up and running! PyPI serves 800K+ users at ~100K requests/sec. With a small team behind the service, that kind of scale is only possible because of infrastructure partners who invest in the sustainability of the ecosystem.


I have a question about Python libraries and testing scope.

If I'm importing 'serial' in my library, and use it like the following to create a connection to a sensor:

--- start code ---

import serial

class Sensor:
    def __init__(self, serial_device):
        self.__serial_device = serial_device
        try:
            self.__connection = serial.Serial(
                port=serial_device,
                baudrate=9600,
                bytesize=serial.EIGHTBITS,
                parity=serial.PARITY_NONE,
                stopbits=serial.STOPBITS_ONE,
            )
        except serial.SerialException:
            print("Could not establish serial connection to sensor")

--- end code ---

how much testing should I do around the serial connection? Just mock up a few buffers (byte streams), and see how my class handles unexpected input?

On the one hand, I want to make the library as solid as possible. On the other hand, I don't want to run tests on code I don't control (the library module). I know of the 'mock-serial' utility, but haven't used it.
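For the connection itself, one common approach is to replace `serial.Serial` with a `unittest.mock` object, so tests exercise your class without hardware. A sketch under assumptions: the fake `serial` module below only stands in so the snippet runs without pyserial installed (in a real suite you would `mock.patch("serial.Serial")`), the class is trimmed, and `read_raw` is a hypothetical method, not part of the code above:

```python
import sys
import types
from unittest import mock

# Stand-in 'serial' module so this sketch runs without pyserial installed;
# with pyserial present you would instead use mock.patch("serial.Serial").
fake_serial = types.ModuleType("serial")
fake_serial.Serial = mock.MagicMock()
fake_serial.SerialException = type("SerialException", (Exception,), {})
sys.modules["serial"] = fake_serial

import serial

class Sensor:
    # Trimmed version of the class above; read_raw is a hypothetical helper.
    def __init__(self, serial_device):
        self.__serial_device = serial_device
        self.__connection = serial.Serial(port=serial_device, baudrate=9600)

    def read_raw(self):
        # Read one raw line (byte stream) from the sensor connection.
        return self.__connection.readline()

# Feed the mock a canned byte stream and check how the class handles it.
serial.Serial.return_value.readline.return_value = b"Z 00400\r\n"
sensor = Sensor("/dev/ttyUSB0")
print(sensor.read_raw())  # b'Z 00400\r\n'
serial.Serial.assert_called_once_with(port="/dev/ttyUSB0", baudrate=9600)
```

This keeps the tests focused on your own code (how the class handles expected and malformed byte streams) without re-testing pyserial itself.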

The aim is to make a Python version of my Arduino library for the CozIR Ambient CO2 sensor:

codeberg.org/mjack/ambientCO2/


My partner is looking for work. I'd appreciate boosts.

He's looking to move into security, but will accept short-term contracts (<12 months). Location: Melbourne, Australia, or remote. For a short enough contract he'd go anywhere, though.

He's a senior full stack web dev (Linux/python/django/js/elm, ~12 years).

Experienced in dev ops, dev sec ops and automation (ansible, selenium, etc etc).

He has experience with OWASP ZAP, bandit and Snyk, and is part way through the PortSwigger academy.

FOSS contributions include writing a django authentication function for OWASP ZAP, making a wrapper to improve accessibility and usability for selenium (Elemental), and other bits and bobs.

He isn't on any socials, but if you want to get in touch I can share his email or signal ID (or give him yours).

He and I have been the security people for little apps without any dedicated security team, for the last decade or so. If you're in security you might have met him (or me) at conferences (Disobey, BSides, CCC, Defcon and Ruxmon), because we've been attending since we launched our own app in 2014, picking up everything we can to protect our users.

(Yep, he is aware a move to security from senior dev roles will be a step down in seniority and $. He just really likes security.)


I just learned that I can use `...` as a placeholder for unfinished code instead of `pass` in Python.
`...` is a built-in object, and it's called the *Ellipsis*.

Example: Instead of

def func():
    pass

one could use the ellipsis like

def func():
    ...
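A quick check of what `...` actually is (a minimal sketch):

```python
# `...` is a literal for the built-in Ellipsis singleton
print(... is Ellipsis)  # True

def func():
    ...  # placeholder body, equivalent in effect to `pass`

print(func())  # None: the bare expression is evaluated and discarded
```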


For 10+ years, Fastly has supported the @ThePSF Python Software Foundation in securing and scaling the Python Package Index (PyPI).

~100K req/sec.
500K+ projects.
98–99% cache hit rate.
Real-time purging in milliseconds.
Adaptive WAF protection against bots + account takeovers.

Proud to help keep one of the world’s most critical open source ecosystems fast, fresh, and secure.

Read more: fastly.com/customers/psf


Finally, my actual introduction; although I'm not new to Mastodon, I'm new to this account...

I have a background in and , and am a lead data scientist on the Integrated Data Service for the UK government.

I talk a lot about , open standards, and "data as the ".

I try to make data Findable, Accessible, Interoperable, and Reusable using and tools.

However, I don't have time for that flame war; there's so much cool stuff about.


I built labeille to find CPython JIT crashes, but it's a "run real world test suites at scale" platform.

It also works for:
— Checking which packages pass their tests on a new CPython version
— Testing free-threaded (no-GIL) CPython compatibility
— Measuring coverage.py or memray overhead across hundreds of packages
— Comparing CPython vs PyPy performance on real code

The registry of 350+ packages with install/test commands is the core.

The most important and tedious part of labeille is that registry. It currently holds 350+ PyPI packages, each with a repo URL, install and test commands, and metadata: whether the package has C extensions, which Python versions to skip, and whether xdist needs to be disabled.

"Just run pytest" doesn't work for all packages. Some need specific test markers or editable installs. Some have tests that might hang. Some need extra dependencies that aren't in their dev requirements.
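The repo documents the real format; purely as an illustration of the fields described above, a registry entry might look like this (all field names here are hypothetical, not labeille's actual schema):

```python
# Hypothetical registry entry: field names are illustrative,
# not labeille's actual schema.
registry_entry = {
    "name": "requests",
    "repo_url": "https://github.com/psf/requests",
    "install": "pip install -e .",
    "test": "pytest tests/",
    "has_c_extensions": False,
    "skip_python": ["3.8"],   # Python versions to skip for this package
    "disable_xdist": True,    # some suites can't run tests in parallel
}

# A runner would consult the entry before launching the suite:
cmd = registry_entry["test"]
if registry_entry["disable_xdist"]:
    cmd += " -p no:xdist"     # pytest flag to disable the xdist plugin
print(cmd)
```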


labeille can compare 2 test runs and show what changed and why it changed.

When it goes from PASS to CRASH, labeille looks at the package's repo. If the commit is the same, it's a CPython/JIT regression. Otherwise, it might be the package:

requests: PASS → CRASH
Repo: abc1234 → abc1234 (unchanged — likely a CPython/JIT regression)

flask: CRASH → PASS
Repo: 222bbbb → 333cccc (changed)

This makes it possible to conclude, say, that 3 of the failures are JIT regressions.
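The triage rule above can be sketched as a tiny function (a schematic, with hypothetical names, not labeille's internals):

```python
# Schematic sketch of the triage rule applied when a package's
# result changes between two test runs.
def classify_change(old_status, new_status, old_commit, new_commit):
    """Return a triage label for a status change between two runs."""
    if old_status == new_status:
        return "no change"
    if old_commit == new_commit:
        # Same package code, different result: suspect the interpreter.
        return "likely CPython/JIT regression"
    return "package changed; might be the package itself"

print(classify_change("PASS", "CRASH", "abc1234", "abc1234"))
print(classify_change("CRASH", "PASS", "222bbbb", "333cccc"))
```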


labeille has a bisect command that binary-searches through a package's git history to find the commit that triggers a JIT crash:

labeille bisect requests --good=v2.30.0 --bad=HEAD --target-python /path/to/cpython-jit

github.com/devdanzin/labeille#

Commits that won't build get skipped automatically (like git bisect skip), revisions get a fresh venv so dependency versions don't leak, and you can filter by crash signature when a package has distinct crashes.
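Under the hood the idea is standard binary search over the commit range, as in git bisect. A schematic sketch (the commit list and the `crashes` callback are stand-ins for building each revision in a fresh venv and running the suite, not labeille's real internals):

```python
# Schematic binary search for the first "bad" commit, as git bisect does.
# `commits` is ordered oldest (known good) to newest (known bad);
# `crashes(commit)` stands in for "build this revision and run the tests".
def bisect_commits(commits, crashes):
    lo, hi = 0, len(commits) - 1   # lo is known good, hi is known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if crashes(commits[mid]):
            hi = mid               # crash reproduces: first bad is at or before mid
        else:
            lo = mid               # still good: first bad is after mid
    return commits[hi]             # first commit where the crash appears

commits = ["v2.30.0", "c1", "c2", "c3", "HEAD"]
print(bisect_commits(commits, lambda c: c in ("c3", "HEAD")))  # c3
```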


labeille runs test suites from popular PyPI packages against a JIT-enabled CPython build and catches crashes: segfaults, assertion failures, etc.

If all of requests, flask, attrs, etc. pass their tests under the JIT, that shows the JIT is working. If one crashes, there's a bug with a reproducer. We've found one crash so far: github.com/python/cpython/issu

This requires curating a local package registry with repo URLs, install and test commands, etc.


I've been working on a new Python tool: labeille. Its main purpose is to look for CPython JIT crashes by running real world test suites.

github.com/devdanzin/labeille

But it's grown a feature that might interest more people: benchmarking using PyPI packages.

How does that work?

labeille allows you to run test suites in 2 different configurations. Say, with coverage on and off, or memray on and off. Here's an example:

gist.github.com/devdanzin/6352


Debugging fail in Python: I had a generator function:

def them():
    for thing in some_things():
        yield thing

But I wanted to quickly try changing it to use a helper that returned a list, so I just added that code at the top:

def them():
    return them_from_somewhere_else()
    for thing in some_things():
        yield thing

Took me a while to figure out why `list(them())` was always an empty list.
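The gotcha in miniature: the mere presence of `yield` anywhere in the body makes the function a generator function, and inside a generator `return` just ends iteration early (the returned value rides along on `StopIteration.value` rather than coming out of the call):

```python
def gen():
    return [1, 2, 3]  # in a generator, `return` just ends iteration early
    yield             # unreachable, but its presence makes this a generator function

print(list(gen()))    # [] — the list never comes out

# The returned value isn't lost entirely: it's attached to StopIteration
g = gen()
try:
    next(g)
except StopIteration as e:
    print(e.value)    # [1, 2, 3]
```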

0