I have a side-project that is basically a React frontend, a Django API server, and a Node universal React renderer. The killer feature is its Elasticsearch database, which searches almost 2.5M large texts and 200K named objects.
At Sentry we’re big users of open source tooling. Specifically, our day-to-day engineering workflows are built on top of GitHub, Travis CI, and a number of other supporting tools.
In November 2017 there was a lovely gathering of independent business folks in Portland, Oregon called DazzleCon.
Leading up to that lovely event, there was an introduction thread where everyone explained their business.
I use this sometimes to get insight into how long some view functions take. Perhaps you'll find it useful too:
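The snippet itself isn't included in this excerpt, so here is a minimal sketch of the idea, assuming a simple logging decorator (the `timed` name and the logging setup are my own, not from the post):

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)


def timed(func):
    """Log how long a (view) function takes, in milliseconds."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - t0) * 1000
            logger.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper
```

You would then decorate any suspect view with `@timed` and watch the log output while exercising the page.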
I’ve been going to professional events for a number of years, and one of the trickiest dynamics I have seen is that most events develop an “insiders” group who have been going for a long time.
Last week I released django-memoize-function, a library that lets Django developers use caching in function calls more conveniently. This is a quick blog post demonstrating that with an example.
Released a new package today: django-cache-memoize
It’s actually quite simple: a Python memoize function that uses Django’s cache, plus the added trick that you can invalidate the cache by making the same function call with the same parameters if you just add .invalidate to your function.
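A minimal sketch of that pattern, using a plain dict as a stand-in for Django's cache (the real package wires this to django.core.cache; the names below are illustrative, not the library's exact implementation):

```python
import functools
import hashlib

# Stand-in for Django's cache backend.
_cache = {}


def cache_memoize(timeout=None):
    """Memoize a function through a cache, and attach an .invalidate()
    that drops the cached entry for a given set of arguments."""
    def decorator(func):
        def make_key(args, kwargs):
            raw = f"{func.__qualname__}:{args!r}:{sorted(kwargs.items())!r}"
            return hashlib.md5(raw.encode()).hexdigest()

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = make_key(args, kwargs)
            if key not in _cache:
                _cache[key] = func(*args, **kwargs)
            return _cache[key]

        def invalidate(*args, **kwargs):
            # Same arguments, same key -- so the cached value is found and removed.
            _cache.pop(make_key(args, kwargs), None)

        wrapper.invalidate = invalidate
        return wrapper
    return decorator


@cache_memoize()
def expensive(x):
    return x * 2


expensive(10)             # computed and cached
expensive(10)             # served from the cache
expensive.invalidate(10)  # next call recomputes
```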
As of PostgreSQL 9.5 we have UPSERT support. Technically, it’s ON CONFLICT, but it’s basically a way to execute an UPDATE statement in case the INSERT triggers a conflict on some column value.
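The syntax in question is PostgreSQL's, but SQLite (3.24+) supports the same ON CONFLICT clause, so a runnable illustration can use Python's bundled sqlite3 module (the hit-counter table here is a made-up example, not from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (url TEXT PRIMARY KEY, count INTEGER NOT NULL)")


def record_hit(url):
    # INSERT a new row, or, if the primary key already exists,
    # run the UPDATE instead -- the UPSERT.
    conn.execute(
        """
        INSERT INTO hits (url, count) VALUES (?, 1)
        ON CONFLICT (url) DO UPDATE SET count = count + 1
        """,
        (url,),
    )


record_hit("/home")
record_hit("/home")
count, = conn.execute(
    "SELECT count FROM hits WHERE url = ?", ("/home",)
).fetchone()
print(count)  # 2
```

Without ON CONFLICT you would need a SELECT-then-INSERT-or-UPDATE dance, which is racy under concurrency; the UPSERT is a single atomic statement.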
Evennia, the Python MUD/MUSH/MU* creation library, is participating in Hacktoberfest 2017 (sign up on that page)! Hacktoberfest is open to all open-source projects like ours.
Now that Evennia’s devel branch (what will become Evennia 0.7) is slowly approaching completion, I thought I’d try to document an aspect of it that probably took me the longest to figure out.
I did another couple of benchmarks of different cache backends in Django. This is an extension and update of Fastest cache backend possible for Django, published a couple of months ago.
I firmly believe that conferences can provide a lot of value for people in an industry.
Conferences allow people to create a network, which helps them feel integrated in a community and profession.
If you’re reading this you’re probably familiar with how, in django-pipeline, you define bundles of static files to be combined and served.
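For readers who aren't familiar with it, a bundle definition in django-pipeline looks roughly like this in settings.py (the bundle names and file paths here are made up for illustration):

```python
# settings.py -- django-pipeline bundle definitions (pipeline 1.6+ dict style).
PIPELINE = {
    "STYLESHEETS": {
        "base": {
            # Individual source files, combined and minified into one output.
            "source_filenames": ("css/reset.css", "css/site.css"),
            "output_filename": "css/base.min.css",
        },
    },
    "JAVASCRIPT": {
        "main": {
            "source_filenames": ("js/app.js", "js/widgets.js"),
            "output_filename": "js/main.min.js",
        },
    },
}
```

Templates then refer to bundles by name, e.g. `{% stylesheet 'base' %}` and `{% javascript 'main' %}` after `{% load pipeline %}`.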
Outlining my plan for iterating on Channels’ design, and what the future might hold for both Django and Python in general.
It’s been around three years since I came up with the current Channels design – that of pushing everything over a networked “channel layer” and strictly separating protocol handling and business logic – and while it’s generally working well for people, I have this feeling it can be improved, and I’ve been thinking about how for the past few months.