Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Should I use Django Admin in production for a pharmacy website?
I am building my first production-ready pharmacy website using Django. The backend will be used to manage products, inventory, orders, and users. I am considering using Django’s built-in Admin interface for internal management instead of building a custom dashboard from scratch. My questions are: - Is Django Admin suitable for production use in this case? - Is it flexible enough to customize (permissions, UI, workflows)? - Is this a common and recommended practice for real-world projects? The system may later be extended to support a mobile or desktop application. -
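Editor's note: Django Admin is widely used in production for exactly this kind of internal back-office work, and permissions, list displays, and workflows are customizable per model. A minimal sketch of the hooks the question asks about, using a hypothetical Product model:

```python
from django.contrib import admin

from .models import Product  # hypothetical inventory model


@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
    list_display = ("name", "stock", "price")  # columns on the change list
    search_fields = ("name",)
    list_filter = ("category",)                # assumes a category field

    def has_delete_permission(self, request, obj=None):
        # Workflow control: restrict destructive actions to superusers.
        return request.user.is_superuser
```

The usual caveat is to keep the Admin behind staff-only accounts for internal management, and to expose customer-facing or mobile features through a separate API/frontend rather than the Admin itself.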
Navbar opacity drops and disappears when clicking links rapidly using HTMX in Django
I am building a Django website where I use HTMX to load partial HTML content when clicking navbar links (instead of full page reloads). The navbar itself is not supposed to change, only the main content area is swapped using HTMX. However, I am encountering a strange issue: When I click a navbar link normally, it usually works fine If I click multiple links quickly, the navbar’s opacity drops and it eventually disappears The issue seems cumulative, as repeated or rapid clicks make the problem more noticeable I am not explicitly changing the navbar’s opacity in JavaScript, and the navbar is not part of the hx-target being swapped. What I’ve checked The navbar HTML is outside the HTMX swap target No JavaScript is manually modifying navbar styles The issue only happens when using HTMX navigation It does not happen with full page reloads <nav class="fixed top-0 left-0 w-full z-50 px-6 py-4 bg-transparent"> <div class="max-w-7xl mx-auto flex justify-between items-center relative"> <!-- Logo --> <div class="flex items-center gap-2 fade-left"> <div class="w-8 h-8 rounded-lg bg-linear-to-br from-red-500 to-pink-500 flex items-center justify-center"> <i data-lucide="languages" class="w-5 h-5 text-white"></i> </div> <a href="{% url 'home' %}" hx-get="{% url 'home' %}" hx-target="#page-content" hx-swap="innerHTML" hx-push-url="true" class="text-xl font-bold text-white" > NativeTube … -
Click event works only for first element in loop in a Django template
I am looping around a cart dictionary that is in the session and I have defined a product for all the items. When each cart item button is clicked, they are sent via ajax, but only the first item is sent, even though each item has a product id specified. {% for item in cart %} <div class="items flex gap-4 bg-white rounded-3xl border p-4 items-center" data-product-id='{{ item.product.id }}'> #somecode <button class="increase-item"> + </button> <button class="decrease-item"> - </button> </div> {% endfor %} js $(document).ready(function(){ var increaseBtn = document.querySelector(".increase-item"); var decreaseBtn = document.querySelector(".decrease-item"); increaseBtn.addEventListener("click", function(){ var itemId = $(this).closest('.items').data('product-id'); updateQuantity(itemId, "increase"); }) decreaseBtn.addEventListener("click", function(){ var itemId = $(this).closest('.items').data('product-id'); updateQuantity(itemId, "decrease"); }) function updateQuantity(id, action){ $.ajax({ type: 'POST', url: '{% url "cart:update_quantity" %}', data: {'id': id, 'action': action, 'csrfmiddlewaretoken': '{{ csrf_token}}'}, success: function(data){ $("#total-items").text(data.total_items); }, }) } }) -
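Editor's note: the immediate cause is that `document.querySelector(".increase-item")` returns only the first matching element, so only the first item's buttons get a listener; binding per element (`querySelectorAll`) or delegating from a parent container fixes the front end. For completeness, a sketch of what the `cart:update_quantity` view might look like, assuming a session cart shaped like `{"<product_id>": {"quantity": int, ...}}` (the helper names and cart shape are assumptions, not from the question):

```python
from django.http import JsonResponse
from django.views.decorators.http import require_POST


@require_POST
def update_quantity(request):
    # Assumed session-cart shape: {"<product_id>": {"quantity": int, ...}}.
    cart = request.session.get("cart", {})
    item_id = str(request.POST.get("id"))
    action = request.POST.get("action")
    if item_id in cart:
        delta = 1 if action == "increase" else -1
        cart[item_id]["quantity"] = max(cart[item_id]["quantity"] + delta, 0)
        request.session.modified = True  # session dicts mutated in place need this
    total_items = sum(item["quantity"] for item in cart.values())
    return JsonResponse({"total_items": total_items})
```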
Best practice for streaming large MP4 videos in browser: Django backend vs NGINX (moov atom, ranges)
I’m working on a web dashboard where users need to play large MP4 videos (around 300–400 MB) directly in the browser and switch between videos quickly (different users/patients). I noticed that if an MP4 doesn’t have the moov atom at the beginning, browsers fail to play it (while VLC works). After fixing this using ffmpeg (faststart), playback improves. Currently, I’m streaming videos through a backend server (Django) using HTTP range requests. It works for light usage, but fast seeking and rapid video switching feel unreliable. My question is: Is it generally recommended to stream large videos directly from an application backend, or is the industry best practice to use a web server like NGINX (or CDN) for video delivery, with the backend only handling auth and returning the video URL? -
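Editor's note: whichever component ends up serving the bytes must honor `Range` requests, which is what fast seeking depends on. A stdlib-only sketch of parsing a single-range header, for illustration (real backends must also handle multi-range and malformed values):

```python
def parse_range(header, file_size):
    """Parse a 'bytes=start-end' Range header into (start, end) offsets.

    Returns None for absent/unsupported headers, in which case the caller
    should respond 200 with the full body instead of 206.
    """
    if not header or not header.startswith("bytes="):
        return None
    spec = header[len("bytes="):].split(",")[0].strip()  # single range only
    start_s, _, end_s = spec.partition("-")
    if start_s == "":                      # suffix form: bytes=-500 (last N bytes)
        length = int(end_s)
        return max(file_size - length, 0), file_size - 1
    start = int(start_s)
    end = int(end_s) if end_s else file_size - 1
    return start, min(end, file_size - 1)
```

On the architecture question: the pattern the poster suspects is indeed the common one. Let Django authenticate and authorize, then hand the byte-serving to NGINX (e.g. via an internal-redirect response header) or to a CDN with signed URLs, since application workers are a poor fit for many long-lived range streams.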
HTML not executing code from external JavaScript file
I wrote ajax for django to add products to cart, I use internal js which runs but when I put that code in external js file it doesn't work, Although I received the ID and csrf_token. html <main data-item-id="{{ product.id }}" data-csrf-token="{{ csrf_token }}"> javascript $(document).ready(function(){ var btn = document.querySelector(".add-to-cart"); var productId = document.querySelector('main').getAttribute('data-item-id'); var csrfToken = document.querySelector('main').getAttribute('data-csrf-token'); btn.addEventListener("click", function(){ $.ajax({ type: 'POST', url: '{% url "cart:add_to_cart"%}', data: {'productId': productId}, headers: {'X-CSRFToken': csrfToken}, success: function(data){ $("#total-items").text(data.total_items); } }) }) }) also the script and jquery are defined. -
"django.db.utils.NotSupportedError: extension 'postgis' not available" error being thrown despite having postgis installed in Python virtualenv
I am attempting to migrate model changes I have made to my codebase to a PostgreSQL server (PostgreSQL 18.1 on x86_64-linux, compiled by gcc-11.4.0, 64-bit), but every time I do this error is thrown. Specifically, the migrate sequence throws the error when reading the file /home/[username]/[project]/[project_directory]/newenv/lib/python3.10/site-packages/django/db/backends/utils.py on return self.cursor.execute(sql). (Note: the system environment is Linux DESKTOP-7C9U6H4 6.6.87.2-microsoft-standard-WSL2). The function called looks like this: def _execute(self, sql, params, *ignored_wrapper_args): # Raise a warning during app initialization (stored_app_configs is only # ever set during testing). if not apps.ready and not apps.stored_app_configs: warnings.warn(self.APPS_NOT_READY_WARNING_MSG, category=RuntimeWarning) self.db.validate_no_broken_transaction() with self.db.wrap_database_errors: if params is None: # params default might be backend specific. return self.cursor.execute(sql) else: return self.cursor.execute(sql, params) In my project settings my Databases dictionary looks like the following: DATABASES = { 'default': { 'ENGINE': 'django.contrib.gis.db.backends.postgis', 'NAME': '[name]', 'USER': '[user]', 'PASSWORD': '', 'HOST': '127.0.0.1', # localhost 'PORT': '5432', } } When I try to enter the command CREATE extension postgis; on the database side (the server is located at /usr/local/pgsql/data), I get the following error thrown: ERROR: extension "postgis" is not available. Hint: The extension must first be installed on the system where PostgreSQL is running. I am unclear why the extension is not available because I … -
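Editor's note: PostGIS is a PostgreSQL server-side extension, so pip-installing a postgis package into the virtualenv cannot make `CREATE EXTENSION` succeed; the extension binaries must be installed on the machine running PostgreSQL (e.g. the distro's PostGIS package built against PostgreSQL 18). Once they are, the extension can be created from a migration instead of by hand; a sketch (dependencies are hypothetical):

```python
from django.contrib.postgres.operations import CreateExtension
from django.db import migrations


class Migration(migrations.Migration):
    # Must run before any migration that creates PostGIS-backed fields.
    dependencies = []  # hypothetical; point at your app's previous migration

    operations = [
        CreateExtension("postgis"),  # issues CREATE EXTENSION IF NOT EXISTS
    ]
```

Note that the database user running migrations needs privileges to create extensions.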
Excel file download via HTMX request results in corrupted or blank file
I am trying to download an Excel file (.xlsx) using an HTMX request in a Django application. The request is triggered correctly, and the response is returned from the server, but the downloaded Excel file is either corrupted or opens as a blank file. -
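Editor's note: this is an HTMX pattern problem rather than a Django one. HTMX reads the response as text to swap into the DOM, which corrupts binary bodies. A commonly used workaround is to answer the HTMX request with an `HX-Redirect` header pointing at a plain download view; a sketch (URL and the workbook helper are hypothetical):

```python
from django.http import HttpResponse


def export_report(request):
    # If HTMX triggered the request, don't return the binary body; tell
    # HTMX to navigate the browser to a plain download URL instead.
    if request.headers.get("HX-Request"):
        response = HttpResponse(status=204)
        response["HX-Redirect"] = "/reports/latest.xlsx"  # hypothetical URL
        return response
    # Plain request: serve the real bytes with the correct content type.
    data = build_workbook_bytes()  # hypothetical helper returning .xlsx bytes
    response = HttpResponse(
        data,
        content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    )
    response["Content-Disposition"] = 'attachment; filename="report.xlsx"'
    return response
```

Alternatively, skip HTMX entirely for downloads and use a normal `<a href>` link, which browsers handle natively.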
What is the recommended frontend approach for beginners using Django?
As a beginner in Django, I am confused about which frontend technology to use for handling dynamic behavior. Should I start with: plain JavaScript, AJAX, or a library like jQuery? Which one has a smoother learning curve and better long-term benefits? -
I have 3 different account types in a Django project. Should I use 1 or 3 apps?
I'm new to Django. I've been building a project with 3 types of accounts: schools, professors, and users. But some things felt too repetitive, like certain model fields and views. I thought that having 3 apps would be too much, so I deleted them and created a single "accounts" app where I manage schools, professors, and users as 3 separate classes in models.py. Is this a good option? I'm not really sure. -
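Editor's note: a single accounts app is a common and reasonable layout. One widely used pattern, sketched with illustrative names, is a single custom user model plus a role field (or per-role profile models), so the shared fields live in one place instead of three near-identical account models:

```python
from django.contrib.auth.models import AbstractUser
from django.db import models


class User(AbstractUser):
    class Role(models.TextChoices):
        SCHOOL = "school", "School"
        PROFESSOR = "professor", "Professor"
        STUDENT = "student", "Student"

    role = models.CharField(max_length=20, choices=Role.choices)


class ProfessorProfile(models.Model):
    # Role-specific fields live on a profile instead of being duplicated
    # across three separate account models.
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    department = models.CharField(max_length=100, blank=True)
```

Shared views (login, registration, password reset) then work for all three roles, with role checks only where behavior genuinely differs.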
I can't install Tailwind CSS in my Django project [duplicate]
(venv) PS C:\poultry_link\poultrylink> npx tailwind css init npm error could not determine executable to run npm error A complete log of this run can be found in: C:\Users\naallah\AppData\Local\npm-cache\_logs\2026-01-02T11_39_06_488Z-debug-0.log -
django model with foreign key onto settings.AUTH_USER_MODEL fails when app is incorporated into other app using postgres
I have a survey app that works fine as a standalone app with no complaints (using sqlite). But when I incorporate the survey app into another that is using postgres as a database, it fails to run the survey app's migrations, complaining that the owner_id (foreign_key to settings.AUTH_USER_MODEL) of the survey table contains null values. django complains that `column "owner_id" of relation "djf_surveys_survey" contains null values` the postgres statement is : ```postgres STATEMENT: ALTER TABLE "djf_surveys_survey" ADD COLUMN "owner_id" bigint NOT NULL CONSTRAINT "djf_surveys_survey_owner_id_f1d8e19a_fk_users_user_id" REFERENCES "users_user"("id") DEFERRABLE INITIALLY DEFERRED; SET CONSTRAINTS "djf_surveys_survey_owner_id_f1d8e19a_fk_users_user_id" IMMEDIATE ``` My model's foreign key line is: ``` owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) ``` Its a simple foreignkey relationship. I haven't ever seen this before, and a reasonable google search has shown nothing that seems relevant. I don't understand why the postgres sql would create the column as deferrable and then add a constraint that makes it IMMEDIATE... The complete stack trace is as follows... (MTIA...:) ```python mind_survey_app_local_django | Running migrations: mind_survey_app_local_postgres | 2025-12-31 15:37:17.848 UTC [37] ERROR: column "owner_id" of relation "djf_surveys_survey" contains null values mind_survey_app_local_postgres | 2025-12-31 15:37:17.848 UTC [37] STATEMENT: ALTER TABLE "djf_surveys_survey" ADD COLUMN "owner_id" bigint NOT NULL CONSTRAINT "djf_surveys_survey_owner_id_f1d8e19a_fk_users_user_id" REFERENCES "users_user"("id") DEFERRABLE INITIALLY … -
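Editor's note: the failure happens because `djf_surveys_survey` already contains rows that cannot satisfy a new NOT NULL `owner_id`; SQLite's table-rebuild behavior can mask this, while Postgres enforces it at `ALTER TABLE` time. (The `DEFERRABLE INITIALLY DEFERRED` / `SET CONSTRAINTS ... IMMEDIATE` pair is Django's normal FK handling inside a transaction and is not the cause.) One conventional fix is a three-step migration: add the column as nullable, backfill, then tighten to non-null. A sketch of the backfill step (migration names and the fallback policy are hypothetical; the `users` app label is inferred from the `users_user` table in the statement):

```python
from django.db import migrations


def backfill_owner(apps, schema_editor):
    # Use historical models inside migrations, never direct imports.
    Survey = apps.get_model("djf_surveys", "Survey")
    User = apps.get_model("users", "User")
    fallback = User.objects.order_by("pk").first()
    if fallback is not None:
        Survey.objects.filter(owner__isnull=True).update(owner=fallback)


class Migration(migrations.Migration):
    dependencies = [("djf_surveys", "0002_owner_nullable")]  # hypothetical

    operations = [
        migrations.RunPython(backfill_owner, migrations.RunPython.noop),
    ]
```

A third migration then flips `null=True` back off. If the imported survey data is disposable, truncating the survey tables before migrating is the simpler route.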
Using django-storages S3 implementation for Minio with s3v4 signature
I have "successfully" set up django-storages to work with our self hosted Minio instance. This is the settings for the setup: STORAGES = { "default": { "BACKEND": "storages.backends.s3.S3Storage", "OPTIONS": { "endpoint_url": os.getenv("MINIO_ENDPOINT_URL"), "access_key": os.getenv("MINIO_ACCESS_KEY"), "secret_key": os.getenv("MINIO_SECRET_KEY"), "bucket_name": os.getenv("MINIO_BUCKET_NAME"), "region_name": os.getenv("MINIO_REGION_NAME"), "signature_version": os.getenv("MINIO_SIGNATURE_VERSION", "s3v4"), }, }, "staticfiles": { "BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage", }, } The region is eu-central-1 which does support the v4 signature for S3. Everything is set up correctly, the endpoint, keys, bucket name, and correctly read from .env when printed out in settings. Now, this completely works when trying to get files, all the files are fetched normally, I can see them in the console, the generated signed URLs work, all is fine. The issue comes when trying to POST/upload a multipart/form-data endpoint that contains a file. When I attempt to do so through the swagger UI, I get the following error: ClientError at /files/uploaded-files/ An error occurred (XAmzContentSHA256Mismatch) when calling the PutObject operation: The provided 'x-amz-content-sha256' header does not match what was computed. I have deduced the issue is with the signature version, because once I switch the signature to s3 from s3v4, both the upload and fetching of the files works correctly. But from what I understood, I … -
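Editor's note: one frequently reported cause of `XAmzContentSHA256Mismatch` against MinIO and other S3-compatible stores is not the v4 signature itself but the payload-integrity checksums botocore enables by default in recent releases. If that is what is happening here, botocore's own environment switches revert to computing checksums only when an operation requires one, which lets you keep `signature_version` at `s3v4`; a sketch (assumes boto3/botocore new enough to support these settings, roughly 1.36+):

```python
import os

# Ask botocore to compute payload checksums only when an operation actually
# requires one, instead of by default; this avoids the SHA-256 mismatch some
# S3-compatible backends produce on uploads.
os.environ.setdefault("AWS_REQUEST_CHECKSUM_CALCULATION", "when_required")
os.environ.setdefault("AWS_RESPONSE_CHECKSUM_VALIDATION", "when_required")
```

Set these before boto3 clients are created (e.g. at the top of `settings.py` or in the process environment). If the problem persists, also check for any proxy between Django and MinIO that rewrites request bodies, which invalidates the signed payload hash the same way.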
Does a ModelSerializer catch "django.core.exceptions.ValidationError"s and turn them to an HTTP response with a 400 status code?
Let's say this is my model: from django.core.exceptions import ValidationError class MyModel(models.Model): value = models.CharField(max_length=255) def clean(self): if self.value == "bad": raise ValidationError("bad value") def save(self): self.full_clean() return super().save() And I have this serializer: from rest_framework.serializers import ModelSerializer class MyModelSerializer(ModelSerializer): class Meta: model = MyModel fields = ["value"] And this was my viewset from rest_framework.viewsets import ModelViewSet class MyModelViewSet(ModelViewSet): queryset = MyModel.objects.all() serializer_class = MyModelSerializer My question is: what should happen when a bad value is submitted for value? Should it be caught by DRF and turned into an HTTP response with a 400 status code? Or should it be treated like a regular exception and crash the server? I'm asking this because when I submit invalid data in DRF's browsable API, instead of catching the ValidationError and returning an HTTP response, DRF stops the application entirely. Is that normal, or am I doing something wrong? I don't want to repeat my validation logic again in the serializer, so what's the correct approach here? -
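Editor's note: what the poster observes is documented behavior, not a misconfiguration. DRF's exception handling converts `rest_framework.exceptions.ValidationError` into an HTTP 400, but it does not catch `django.core.exceptions.ValidationError` raised from `save()`/`full_clean()`, so that one propagates as a server error. To avoid duplicating the rules, one common pattern is to run the model's own `full_clean()` from the serializer and translate the exception; a sketch (covers the create path; for updates you would apply `attrs` onto `self.instance` first):

```python
from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework import serializers

from .models import MyModel  # the question's model


class MyModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        fields = ["value"]

    def validate(self, attrs):
        # Reuse the model's clean() logic and translate failures into the
        # DRF ValidationError type, which the framework renders as HTTP 400.
        instance = MyModel(**attrs)
        try:
            instance.full_clean()
        except DjangoValidationError as exc:
            raise serializers.ValidationError(exc.messages)
        return attrs
```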
Django ManyToMany table missing even though migrations are applied (Docker + Postgres)
Problem I am working on a Django project using a PostgreSQL container in Docker. I added a ManyToManyField to one of my models. I keep migrations in .gitignore because my local and production databases are different, so I only pushed the model changes to GitHub. On my server, I pulled the code and ran docker compose up, which automatically runs makemigrations and migrate. There were no errors or warnings, and the generated migration file correctly includes the ManyToMany Field. Django shows the migration as applied. However, when I try to access the ManyToMany relation at runtime, I get a database error saying the join table does not exist. Environment Django PostgreSQL Docker Model class Contact(models.Model): ... class CallCampaign(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) error = models.TextField(blank=True, null=True) company = models.ForeignKey(Company, on_delete=models.CASCADE) selected_contacts = models.ManyToManyField('Contact', blank=True) ... Migration The migration file includes the ManyToManyField: migrations.CreateModel( name='CallCampaign', fields=[ ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('error', models.TextField(blank=True, null=True)) ('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='companies.company')), ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)), ('selected_contacts', models.ManyToManyField(blank=True, to='contacts.contact')), ], ), The migration is marked as applied in the database. What happens at runtime >>> from contacts.models import CallCampaign >>> c = CallCampaign.objects.first() >>> c.selected_contacts <ManyRelatedManager object at 0x...> >>> c.selected_contacts.all() Error: django.db.utils.ProgrammingError: relation "contacts_callcampaign_selected_contacts" does not … -
Does Django inlineformset allow for editing & submitting related model data?
I'm attempting to display a form for a tennis event (location, time/date) and each of its participants (name, status). I was recommended to use an inlineformset, which I assume would allow editing of all those fields with one submit button. What I'm getting: the participants fields are all editable fields, but the event fields are not (they're just displayed): Am I correct in assuming that an inlineformset should allow this approach? Models: class Event(models.Model): date_time = models.DateTimeField(default=datetime.now(), auto_now=False, auto_now_add=False, null=False, blank=False) # location = models.CharField(max_length=50, null=False, blank=False, help_text="Where the event will happen, e.g. location and court") roster = models.ForeignKey(Roster, related_name='events', on_delete=models.CASCADE, default=None, null=True, help_text="The event's roster") comment = models.CharField(max_length=400, null=True, blank=True, help_text="A comment about an event, e.g. time / court constraints etc.") class Participant(models.Model): member_name = models.CharField(max_length=50, null=False, blank=False) event = models.ForeignKey(Event, on_delete=models.CASCADE, null=False, related_name="participants") status = models.CharField(max_length=50, choices=StatusChoices, default="Unknown") comment = models.CharField(max_length=400, blank=True, null=True) View: def event_edit(request, event_id): event = get_object_or_404(Event, pk=event_id) roster = event.roster ParticipantInlineFormSet = inlineformset_factory(Event, Participant, fields=["id","member_name", "status","comment"], extra=0) if request.method == 'POST': formset = ParticipantInlineFormSet(request.POST, request.FILES, instance=event) if formset.is_valid(): formset.save() return render(request,'event_list.html',{"roster": roster, 'events': roster.events.all()}) else: formset = ParticipantInlineFormSet(instance=event) print(formset) return render(request, "event_edit_2.html", {"formset": formset, "event": event}) Template: 
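Editor's note: an inline formset only renders and saves the child objects; the parent's own fields need a separate `ModelForm`, validated and saved alongside the formset. A sketch of the view with a hypothetical `EventForm` (the models are the question's):

```python
from django.forms import ModelForm, inlineformset_factory
from django.shortcuts import get_object_or_404, render

from .models import Event, Participant  # the question's models


class EventForm(ModelForm):  # hypothetical parent form
    class Meta:
        model = Event
        fields = ["date_time", "roster", "comment"]


def event_edit(request, event_id):
    event = get_object_or_404(Event, pk=event_id)
    ParticipantFormSet = inlineformset_factory(
        Event, Participant, fields=["member_name", "status", "comment"], extra=0
    )
    if request.method == "POST":
        form = EventForm(request.POST, instance=event)
        formset = ParticipantFormSet(request.POST, request.FILES, instance=event)
        if form.is_valid() and formset.is_valid():
            form.save()      # saves the parent Event's own fields
            formset.save()   # saves the Participant rows
    else:
        form = EventForm(instance=event)
        formset = ParticipantFormSet(instance=event)
    return render(
        request, "event_edit_2.html",
        {"form": form, "formset": formset, "event": event},
    )
```

The template then renders `{{ form }}` above `{{ formset }}` inside the one `<form>`, keeping the single submit button. (Separately: `default=datetime.now()` on `date_time` is evaluated once at import time; `default=timezone.now`, without parentheses, is the usual fix.)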
<form method="post"> {% csrf_token %} {{ formset.management_form … -
Django ManyToMany self relationship with through model — user_from and user_to reversed behavior
I added following and followers using an intermediate model. It works, but the objects in the intermediate model are not assigned correctly. I expect the user who follows to be in the user_from field and the one who is followed to be in user_to, but they are placed the opposite way. For example: class User(AbstractUser): #somefields following = models.ManyToManyField("self", related_name='followers', through="Contact", through_fields=('user_to', 'user_from'), symmetrical=False, blank=True) intermediate model: class Contact(models.Model): user_from = models.ForeignKey(User, related_name="rel_from_set", on_delete=models.CASCADE) user_to = models.ForeignKey(User, related_name="rel_to_set", on_delete=models.CASCADE) created = models.DateTimeField(auto_now_add=True) class Meta: ordering = ['-created'] indexes = [models.Index(fields=['created'])] def __str__(self): return f"{self.user_from} follows {self.user_to}" -
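Editor's note: per Django's `through_fields` contract, the first name is the foreign key pointing back at the model that declares the `ManyToManyField` (the source, here the follower) and the second points at the target (the followed). With `('user_to', 'user_from')` the relation is read backwards, which matches the reversed behavior observed; swapping them gives the intended direction:

```python
from django.contrib.auth.models import AbstractUser
from django.db import models


class User(AbstractUser):
    # ...
    following = models.ManyToManyField(
        "self",
        related_name="followers",
        through="Contact",
        through_fields=("user_from", "user_to"),  # (source, target); was reversed
        symmetrical=False,
        blank=True,
    )
```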
unsupported operand type(s) for +: 'NoneType' and 'datetime.timedelta'
I'm getting the above error while executing the following method in my django rest framework project (Django==5.0.4, djangorestframework==3.16.1). def activate( self, subscription_date=None, mark_transaction_paid=True, no_multiple_subscription=False, del_multiple_subscription=False, ): if no_multiple_subscription: self.deactivate_previous_subscriptions(del_multiple_subscription=del_multiple_subscription) current_date = subscription_date or timezone.now() next_billing_date = self.plan_cost.next_billing_datetime(current_date) self.active = True self.cancelled = False self.due = False self.date_billing_start = current_date self.date_billing_end = next_billing_date + timedelta(days=self.plan_cost.plan.grace_period) self.date_billing_next = next_billing_date self._add_user_to_group() if mark_transaction_paid: self.transactions.update(paid=True) self.save() The trace points to this line in the above method: self.date_billing_end = next_billing_date + timedelta(days=self.plan_cost.plan.grace_period) What I've tried : changed current_date = subscription_date or timezone.now() to current_date = subscription_date or datetime.now() I'm currently setting up the following package for subscription plan in my Django Saas project https://github.com/ydaniels/drf-django-flexible-subscriptions/tree/master The file in the above project where the issue is https://github.com/ydaniels/drf-django-flexible-subscriptions/blob/master/subscriptions_api/base_models.py -
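Editor's note: the traceback means `next_billing_datetime()` returned `None` for this plan cost (e.g. a non-recurring or unrecognized recurrence), so `None + timedelta` fails; swapping `timezone.now()` for `datetime.now()` cannot help because `current_date` is not the `None` operand. A stdlib-only guard sketch:

```python
from datetime import datetime, timedelta


def billing_end(next_billing_date, grace_days):
    """Return the end of the grace window, or None for non-recurring plans."""
    if next_billing_date is None:
        # No next billing cycle: there is nothing to extend by a grace period.
        return None
    return next_billing_date + timedelta(days=grace_days)
```

In `activate()` this means checking `next_billing_date` before computing `date_billing_end`, and deciding explicitly what a subscription without a next cycle should store there.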
LinkedIn organizationPageStatistics API returns 400 PARAM_INVALID for timeIntervals (REST.li 2.0)
I am calling the LinkedIn organizationPageStatistics endpoint using the REST.li 2.0 API, but I keep getting a 400 PARAM_INVALID error related to the timeIntervals parameter. According to the official documentation (li-lms-2025-11), timeIntervals is an object, not a list. Error response { "errorDetailType": "com.linkedin.common.error.BadRequest", "message": "Invalid param. Please see errorDetails for more information.", "errorDetails": { "inputErrors": [ { "description": "Invalid value for param; wrong type or other syntax error", "input": { "inputPath": { "fieldPath": "timeIntervals" } }, "code": "PARAM_INVALID" } ] }, "status": 400 } Code def fetch_linkedin_analytics_and_save(user, account): organization_urn = f"urn:li:organization:{account.page_id}" access_token = account.access_token start_ms = int( (datetime.now() - timedelta(days=90)).timestamp() * 1000) end_ms = int(datetime.now().timestamp() * 1000) base_url = "https://api.linkedin.com/rest/organizationPageStatistics" LINKEDIN_API_VERSION = os.environ.get("LINKEDIN_API_VERSION", "202511") headers = { "Authorization": f"Bearer {access_token}", "Linkedin-Version": LINKEDIN_API_VERSION, "X-Restli-Protocol-Version": "2.0.0", "Content-Type": "application/json" } time_intervals_str = f'(timeGranularityType:DAY,timeRange:(start:{start_ms},end:{end_ms}))', params = { "q": "organization", "organization": organization_urn, 'timeIntervals.timeGranularityType': 'DAY', 'timeIntervals.timeRange.start': start_ms, 'timeIntervals.timeRange.end': end_ms } print(params) response = requests.get(base_url, headers=headers, params=params) if response.status_code != 200: print(response.text) Attempt 1 – REST.li object string time_intervals_str = ( f"(timeGranularityType:DAY," f"timeRange:(start:{start_ms},end:{end_ms}))" ) params = { "q": "organization", "organization": organization_urn, "timeIntervals": time_intervals_str } Attempt 2 – Flattened parameters params = { "q": "organization", "organization": organization_urn, 
"timeIntervals.timeGranularityType": "DAY", "timeIntervals.timeRange.start": start_ms, "timeIntervals.timeRange.end": end_ms } Your help will be … -
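Editor's note: a detail worth checking before blaming the parameter shape. `requests.get(params=...)` percent-encodes the parentheses, colons, and commas, but REST.li 2.0 expects the complex-object literal verbatim, and under protocol 2.0 the flattened dotted form is not accepted either, so the server sees a malformed `timeIntervals` both ways. One commonly used workaround is to build the query string by hand (epoch values and URN below are illustrative):

```python
from urllib.parse import quote

start_ms, end_ms = 1727740800000, 1735516800000           # example epoch millis
organization_urn = "urn:li:organization:12345"            # hypothetical URN
base_url = "https://api.linkedin.com/rest/organizationPageStatistics"

# REST.li 2.0 object literal: parentheses, colons, and commas must stay raw.
time_intervals = f"(timeGranularityType:DAY,timeRange:(start:{start_ms},end:{end_ms}))"

url = (
    f"{base_url}?q=organization"
    f"&organization={quote(organization_urn, safe='')}"   # the URN is still encoded
    f"&timeIntervals={time_intervals}"
)
```

Then call `requests.get(url, headers=headers)` with no `params` argument, so nothing re-encodes the literal.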
Which is best for Python developers in 2026: Python data science or Python Django?
Only experts can answer this question. Which is best for Python developers in 2026: Python data science or Python Django? And of the two, which is easier to learn and higher paid? -
Research survey: Evaluating Code First vs Database First in different ORMs
I am conducting an academic research study focused on comparing Code First (CF) and Database First (DBF) approaches in different ORMs. The goal of this survey is to collect objective, experience-based input from developers who have worked in real-world projects. The responses will be used to analyze how CF and DBF are implemented in practice, based on clearly defined technical and organizational criteria. The comparison relies on a structured set of criteria covering key aspects of database usage in modern Django applications — including schema design, migrations and change management, performance considerations, version control, and team collaboration. These criteria are intended not only to describe theoretical differences, but to provide a practical framework for objectively evaluating both approaches in real development scenarios. The same criteria are applied across multiple ORM environments (Entity Framework Core, Hibernate, Django ORM, and Doctrine) in order to compare how different ORMs implement Code First and Database First in practice. If you have experience working with any of these ORMs here are the different survey links: Django: https://docs.google.com/forms/d/e/1FAIpQLSfFvpzjFii9NFZxbaUTIGZEaY0WY4jXty4Erv-hKZPE1ZESyA/viewform?usp=dialog EF Core: https://docs.google.com/forms/d/e/1FAIpQLSdGkQuwa4pxs_3f9f2u9Af64wqy_zeLP2xhhcwKxHnaQdWLmQ/viewform?usp=dialog Hibernate: https://docs.google.com/forms/d/e/1FAIpQLSdU51vOlhwxLFXA7Rp24pdYO-gRwZgm02qqIWaGaEz10MuwQg/viewform?usp=dialog Doctrine: https://docs.google.com/forms/d/e/1FAIpQLSeWwuI1PSFfN3tNC2yYXjw787zfoXOeXKehC1kce3ondiK8NQ/viewform?usp=dialog Thank you for contributing; comments, corrections, and practical insights are very welcome. -
Django DRF JWT Authentication credentials were not provided on UpdateAPIView even for is_staff user
I'm implementing JWT authentication using Django REST Framework and djangorestframework-simplejwt in my project. I have an endpoint for updating a category. What I tried Verified that the JWT token is valid. Confirmed that the user is is_staff=True and is_superuser=True. Tried both PATCH and PUT methods. Question Why am I getting the error message: Authentication credentials were not provided. on this UpdateAPIView, even though JWT is configured and the user is admin? Is there something specific about UpdateAPIView or the way permissions are checked that I might be missing? Imports from rest_framework import generics from rest_framework.permissions import IsAdminUser from drf_spectacular.utils import extend_schema from .serializers import CategorySerializer from .models import Category from .limiter import AdminCategoryThrottle View @extend_schema( tags=["categories"], summary="Update category (admin only)", responses={201: CategorySerializer} ) class UpdateCategoryView(generics.UpdateAPIView): """ This endpoint allows an admin user to update a category. It is protected and only admin users can access it. """ serializer_class = CategorySerializer permission_classes = [IsAdminUser] throttle_classes = [AdminCategoryThrottle] queryset = Category.objects.all() lookup_field = "slug" Serializer from rest_framework import serializers from .models import Category class CategorySerializer(serializers.ModelSerializer): class Meta: model = Category fields = ["name", "is_active"] read_only_fields = ["slug", "created_at", "updated_at"] def validate_name(self, value): if Category.objects.filter(name__iexact=value).exists(): raise serializers.ValidationError("Category already exists.") return value URL path( … -
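Editor's note: "Authentication credentials were not provided." from an `IsAdminUser`-protected view usually means no authentication class ever ran, so `request.user` stayed anonymous regardless of `is_staff`. Worth confirming that `JWTAuthentication` is actually wired in, either globally or via `authentication_classes` on the view; a settings fragment (assumes `rest_framework_simplejwt` is installed):

```python
# settings.py fragment: without this (or authentication_classes on the view),
# DRF never reads the JWT Authorization header and IsAdminUser rejects the
# resulting anonymous user with exactly this message.
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
}
```

Also check that the client sends `Authorization: Bearer <access-token>`; a missing scheme prefix, or a proxy stripping the `Authorization` header, produces the same error. (Minor: the `@extend_schema` responses should document 200 for an update, not 201.)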
What is _db used for in a Django Model Manager
While trying to create a custom QuerySet and a custom Manager for a Django model I stumbled upon the documentation sections 1 and 2 that use the manager's _db property, and I would like to understand what this property does and why it is necessary to use it when overriding the get_queryset method. I read the code in the Django repo and ran some local queries against different databases while printing the _db property, but it always seems to be None. So, what is it used for, and why is it important to handle it as per doc 2? -
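Editor's note: `_db` being `None` is the normal case; it only gets a value when a manager is pinned to a specific database via `Manager.db_manager("alias")` (Django also uses this internally, e.g. when loading fixtures into a chosen database). Forwarding it matters when you construct the queryset yourself, as in the documented pattern:

```python
from django.db import models


class PersonQuerySet(models.QuerySet):
    def authors(self):
        return self.filter(role="A")


class PersonManager(models.Manager):
    def get_queryset(self):
        # self._db is None in the common case, so the database router picks
        # the connection. db_manager("replica") sets it, and passing it
        # through here is what makes
        #   Person.objects.db_manager("replica").authors()
        # actually hit the replica instead of falling back to routing.
        return PersonQuerySet(self.model, using=self._db)
```

If you drop `using=self._db`, everything still works for single-database projects, which is why it is easy to miss; the breakage only appears once multi-database pinning is in play.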
Gunicorn (Uvicorn Worker) continues processing requests after Heroku 30s timeout
I’m running a Django (ASGI) app on Heroku using Gunicorn with the Uvicorn worker to support WebSockets. Heroku has a hard 30-second request timeout. When a request exceeds 30 seconds, Heroku closes the connection as expected. Problem: Even after the Heroku timeout, Gunicorn/Uvicorn continues executing the request in the background, which wastes resources. Gunicorn command: newrelic-admin run-program gunicorn --workers 4 --worker-connections 200 --timeout 30 --max-requests 1000 --max-requests-jitter 500 --bind 0.0.0.0:8000 asgi:application Questions Why does Gunicorn/Uvicorn keep running the request after Heroku times out? Is there a way to cancel the request when the client disconnects? Should this be handled in Django (async cancellation/middleware), or via Gunicorn settings? Any help is appreciated. -
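Editor's note: Heroku's router gives up after 30 s but does not signal the dyno, and Gunicorn's `--timeout` is a worker liveness check (largely inert under the async Uvicorn worker), not a per-request deadline, so the handler keeps running. One mitigation is to enforce your own budget inside the app; a sketch with an illustrative async view (`build_report` is a hypothetical coroutine doing the slow work):

```python
import asyncio

from django.http import JsonResponse

REQUEST_BUDGET = 25  # seconds, safely below Heroku's 30 s router timeout


async def slow_report(request):
    try:
        data = await asyncio.wait_for(build_report(), timeout=REQUEST_BUDGET)
    except asyncio.TimeoutError:
        # The coroutine is cancelled here instead of running on unobserved;
        # genuinely long jobs belong in a worker queue with a polling endpoint.
        return JsonResponse({"detail": "report still running"}, status=202)
    return JsonResponse(data)
```

For true disconnect-driven cancellation you need ASGI-level handling of the `http.disconnect` event; plain sync Django views never see it, which is why moving long work to background workers is the usual recommendation on Heroku.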
Correct implementation of Wright–Malécot inbreeding coefficient using shortest paths (Python/Django)
I am implementing the Wright inbreeding coefficient (F) for individuals in a pigeon pedigree (Django application, relational database). The current implementation builds the pedigree using a breadth-first search (BFS) up to a fixed number of generations and computes F using the classical formula: 𝐹=∑[(1/2)^(𝑛1+𝑛2+1)] (1+𝐹𝐴) where: n1 is the number of generations between the sire and the common ancestor, n2 is the number of generations between the dam and the common ancestor, 𝐹𝐴 is the inbreeding coefficient of the common ancestor. To reduce complexity, the algorithm intentionally: considers only the shortest path from sire and dam to each common ancestor, does not enumerate all possible loops in the pedigree. Although this approach works for simple pedigrees, the computed F values are incorrect for more complex cases involving: multiple independent loops, repeated ancestors through different paths, deeper or overlapping inbreeding structures. Specifically: the algorithm underestimates F when a common ancestor appears via more than one valid loop, contributions from additional paths are ignored, recursive calculation of 𝐹𝐴 propagates the same limitation. I am looking for clarification on whether: the Wright–Malécot coefficient can be correctly computed using shortest paths only, or all valid ancestral loops must be explicitly enumerated (or otherwise accounted …