Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How do you deal with permission management when using Elasticsearch indexes?
I am using django-guardian for per-object permission management and django-elasticsearch-dsl for faster queries across our data. It's pretty straightforward for public lists, but I am having difficulty designing scalable permission management, so that a filtered list shows only the items the current request.user has permission to view and change. Some solutions suggested by AI: (1) get a list of uuids the user has access to, then filter items in Elasticsearch by those uuids (not very scalable); (2) post-process the public results with django-guardian API functions, which takes too long for queries with tens of thousands of results (pagination could be skipped to process only the first page, but that's not preferable); (3) add lists of the user ids and group ids allowed to view, edit, and delete each item to the item index and check the current user's id and group ids against those fields; (4) create an index for the User model holding all viewable, editable, and deletable item uuids and then use a terms lookup against that index to filter the item list. All those approaches are questionable to me, when I am … -
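For the permission question above, the denormalization approach (option 3) is usually the one that scales: store the allowed user and group ids on each Elasticsearch document and filter on them at query time. A minimal sketch with django-elasticsearch-dsl and django-guardian follows; the Item model, index name, and field names are assumptions, not code from the question.

```python
# Hedged sketch: denormalize per-object permissions into the index and filter on them.
from django_elasticsearch_dsl import Document, fields
from django_elasticsearch_dsl.registries import registry
from elasticsearch_dsl import Q
from guardian.shortcuts import get_groups_with_perms, get_users_with_perms

from myapp.models import Item  # hypothetical model


@registry.register_document
class ItemDocument(Document):
    allowed_user_ids = fields.KeywordField(multi=True)
    allowed_group_ids = fields.KeywordField(multi=True)

    class Index:
        name = "items"

    class Django:
        model = Item
        fields = ["name", "uuid"]

    def prepare_allowed_user_ids(self, instance):
        # users holding any object permission on this item (narrow as needed)
        return [str(u.pk) for u in get_users_with_perms(instance)]

    def prepare_allowed_group_ids(self, instance):
        return [str(g.pk) for g in get_groups_with_perms(instance)]


def search_items_for(user, text):
    """Public query plus a terms filter on the requesting user's id/group ids."""
    allowed = Q("terms", allowed_user_ids=[str(user.pk)]) | Q(
        "terms", allowed_group_ids=[str(g.pk) for g in user.groups.all()]
    )
    return ItemDocument.search().query("match", name=text).filter(allowed)
```

The trade-off is that permission changes must trigger re-indexing of the affected documents, but queries stay a single filtered search regardless of result size.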
"Internal Server Error" when sending email via Django using DigitalOcean
When trying to send an email from a Django production setup (using gunicorn) on a DigitalOcean droplet, I get "Internal Server Error" in the browser, and gunicorn logs this error: … File "/usr/lib/python3.13/smtplib.py", line 255, in __init__ (code, msg) = self.connect(host, port) File "/usr/lib/python3.13/smtplib.py", line 341, in connect self.sock = self._get_socket(host, port, self.timeout) File "/usr/lib/python3.13/smtplib.py", line 312, in _get_socket return socket.create_connection((host, port), timeout, self.source_address) File "/usr/lib/python3.13/socket.py", line 849, in create_connection sock.connect(sa) File "/usr/lib/python3.13/site-packages/gunicorn/workers/base.py", line 204, in handle_abort sys.exit(1) SystemExit: 1 This used to work without issues last year. -
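The traceback above dies inside socket.create_connection before SMTP even starts, and the gunicorn handle_abort frame suggests the worker was killed while the connect call hung, which typically points at a blocked or unreachable SMTP port rather than at Django itself (DigitalOcean, for example, restricts outbound SMTP on many droplets). A small diagnostic sketch, run directly on the droplet with the real EMAIL_HOST/EMAIL_PORT substituted:

```python
# Hedged diagnostic: reproduce the connection Django would make, with a short timeout.
import smtplib
import socket

HOST, PORT = "smtp.example.com", 587  # substitute your EMAIL_HOST / EMAIL_PORT

try:
    with smtplib.SMTP(HOST, PORT, timeout=10) as smtp:
        print("connected:", smtp.noop())
except (socket.timeout, OSError) as exc:
    # A timeout here usually means the provider or a firewall is blocking
    # outbound SMTP; in Django the worker would hang until gunicorn aborts it,
    # matching the SystemExit in the log.
    print("connection failed:", exc)
```

Setting EMAIL_TIMEOUT in Django settings also makes such failures surface as a clear exception instead of a killed worker.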
How to use DRF serializer fields as django-filter filter fields?
I’m working with Django REST Framework and django-filter to implement API filtering. I created custom serializer fields (for example, a JalaliDateField that converts between Jalali and Gregorian dates, and applies Django’s timezone settings). I expected that I could just pass these serializer fields into a django_filters.Filter by setting field_class, but it turns out Filter.field_class is only compatible with django.forms.Field, even when using django_filters.rest_framework. So my question is: Is there a clean way to make django-filters work directly with DRF serializer fields? What I tried Naively plugging in DRF serializer fields: class JalaliDateFilter(django_filters.Filter): field_class = MyCustomJalaliDateSerializerField This fails, since django-filters expects a forms.Field, not a DRF serializers.Field. Proposed solution #1: Write a wrapper that adapts DRF fields into Django form fields Here’s a minimal sketch: from django import forms from rest_framework import serializers class DRFFieldFormWrapper(forms.Field): """ Wrap a DRF serializer field so it can behave like a Django form field. """ def __init__(self, drf_field: serializers.Field, *args, **kwargs): self.drf_field = drf_field kwargs.setdefault("required", drf_field.required) kwargs.setdefault("label", getattr(drf_field, "label", None)) super().__init__(*args, **kwargs) def to_python(self, value): if value in self.empty_values: return None return self.drf_field.run_validation(value) def prepare_value(self, value): return self.drf_field.to_representation(value) Then, a custom filter: import django_filters class DRFFieldFilter(django_filters.Filter): def __init__(self, *args, drf_field=None, **kwargs): if drf_field is None: … -
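A possible completion of the idea in the question is a Filter subclass whose field property returns the wrapper instead of instantiating field_class; this is a hedged sketch that reuses the DRFFieldFormWrapper defined above, with the FilterSet usage shown only as a comment because the model and the Jalali field are project-specific:

```python
# Hedged sketch: a Filter whose form field delegates parsing to a DRF serializer field.
import django_filters
from rest_framework import serializers


class DRFFieldFilter(django_filters.Filter):
    def __init__(self, *args, drf_field: serializers.Field, **kwargs):
        self.drf_field = drf_field
        super().__init__(*args, **kwargs)

    @property
    def field(self):
        # django-filter normally builds self.field_class here; build and cache
        # the wrapper around the DRF field instead.
        if not hasattr(self, "_wrapped_field"):
            self._wrapped_field = DRFFieldFormWrapper(  # class from the question
                self.drf_field,
                required=self.extra.get("required", False),
                label=self.extra.get("label"),
            )
        return self._wrapped_field


# Usage inside a FilterSet (model and serializer field are placeholders):
# created_after = DRFFieldFilter(field_name="created_at", lookup_expr="gte",
#                                drf_field=JalaliDateField())
```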
SSL Certificate error on SMTP Django DRF App
I have a Django DRF backend that works just fine when using EMAIL_BACKEND = "django.core.mail.backends.locmem.EmailBackend". But then, switching to SMTP powered by Google as: EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend" EMAIL_HOST = "smtp.gmail.com" EMAIL_PORT = 587 EMAIL_USE_TLS = True EMAIL_HOST_USER = config("GMAIL_APP_HOST_USER") EMAIL_HOST_PASSWORD = config("GMAIL_APP_HOST_PASSWORD") DEFAULT_FROM_EMAIL = "TestApp" ACCOUNT_EMAIL_SUBJECT_PREFIX = "" I get a [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Basic Constraints of CA cert not marked critical. This is being tested on a Windows 11 PC. Error details with traceback: Django Version: 5.2.6 Python Version: 3.13.7 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.sites', 'django.contrib.staticfiles', 'rest_framework', 'rest_framework.authtoken', 'rest_framework_simplejwt', 'allauth', 'allauth.account', 'allauth.headless', 'allauth.socialaccount', 'allauth.socialaccount.providers.google', 'dj_rest_auth', 'dj_rest_auth.registration', 'corsheaders', 'authentication.apps.AuthenticationConfig'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'allauth.account.middleware.AccountMiddleware', 'corsheaders.middleware.CorsMiddleware', 'django.middleware.common.CommonMiddleware'] Traceback (most recent call last): File "C:\dev\myProject\venv\Lib\site-packages\django\core\handlers\exception.py", line 55, in inner response = get_response(request) File "C:\dev\myProject\venv\Lib\site-packages\django\core\handlers\base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\dev\myProject\venv\Lib\site-packages\django\views\decorators\csrf.py", line 65, in _view_wrapper return view_func(request, *args, **kwargs) File "C:\dev\myProject\venv\Lib\site-packages\django\views\generic\base.py", line 105, in view return self.dispatch(request, *args, **kwargs) File "C:\dev\myProject\venv\Lib\site-packages\django\utils\decorators.py", line 48, in _wrapper return bound_method(*args, **kwargs) File "C:\dev\myProject\venv\Lib\site-packages\django\views\decorators\debug.py", line 143, in sensitive_post_parameters_wrapper return view(request, *args, **kwargs) File "C:\dev\myProject\venv\Lib\site-packages\dj_rest_auth\registration\views.py", line 47, in dispatch return super().dispatch(*args, **kwargs) File "C:\dev\myProject\venv\Lib\site-packages\rest_framework\views.py", line 515, … -
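The "Basic Constraints of CA cert not marked critical" wording often points at a certificate injected locally (antivirus or corporate TLS inspection on the Windows machine) rather than at Google's own chain. A small diagnostic sketch that prints the certificate this PC actually receives from smtp.gmail.com so the issuer can be checked; verification is deliberately disabled here for inspection only:

```python
# Hedged diagnostic: dump the peer certificate presented after STARTTLS.
import smtplib
import ssl

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # diagnostics only, never for real sending

with smtplib.SMTP("smtp.gmail.com", 587, timeout=10) as smtp:
    smtp.starttls(context=context)
    der_cert = smtp.sock.getpeercert(binary_form=True)
    print(ssl.DER_cert_to_PEM_cert(der_cert))
```

If the printed certificate is issued by an antivirus or proxy product instead of Google Trust Services, the fix lies in that software's settings (or in trusting its root CA), not in Django.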
Discussion About Architecture: ROS2-Django-Webinterface
I am currently building a cobot/robot arm control interface and workflow planning tool. I am currently contemplating all my architecture choices so far, because I feel like the architecture is not scaling well and could fail under large loads. So I need to ask the community... General Architecture Overview The whole architecture is what I would call microservice-based (correct me if I am wrong). I have multiple standalone components (standalone means containerized in this case and also with its own area of functionality. E.g. camera controller, robot controller, gripper controller etc.). All of these components use ROS2 for communication and are implemented in their respective language, mostly C++ and Python. Then there is the Core Application which is a backend for connecting all information and managing the components. This is kind of the brain. This brain also has a UI. For the Core/UI stack I chose Python Django + ReactJS. I chose this stack because: I am fluent in Python and React, fast Prototyping for the first Prototype, Direct ROS2 Integration in Python, Django has a good ORM and supports async operations so that I can connect ROS2 (as a bridge for the UI kind of) and store data through … -
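Since the plan above is to use Django's async support as the bridge between ROS2 and the UI, a minimal sketch of that bridge may help the discussion: a ROS2 node that forwards status messages into a Django Channels group, assuming Channels is installed, DJANGO_SETTINGS_MODULE is set (e.g. the node runs as a management command), and a consumer with a status_update handler has joined the group. The topic and message type are placeholders.

```python
# Hedged sketch: ROS2 subscription -> Channels group, for pushing robot state to the UI.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer


class StatusBridge(Node):
    def __init__(self):
        super().__init__("django_status_bridge")
        self.channel_layer = get_channel_layer()
        self.create_subscription(String, "/robot/status", self.on_status, 10)

    def on_status(self, msg):
        # Every ROS2 status message is fanned out to the WebSocket clients in the group.
        async_to_sync(self.channel_layer.group_send)(
            "robot_status", {"type": "status_update", "payload": msg.data}
        )


def main():
    rclpy.init()
    rclpy.spin(StatusBridge())


if __name__ == "__main__":
    main()
```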
Django WeasyPrint high memory usage with large datasets
I am using WeasyPrint in Django to generate a PDF. However, when processing around 11,000 records, it consumes all available resources allocated to the Kubernetes pod. As a result, the pod restarts, and I never receive the generated PDF via email. Are there: Any lightweight PDF libraries that can handle generating PDFs for thousands of records more efficiently? Any optimization techniques in WeasyPrint (or in general) to reduce resource usage and generate the PDF successfully? -
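One common mitigation is to stop rendering all 11,000 records as a single WeasyPrint document: render the report in batches and merge the partial PDFs, so peak memory stays proportional to the batch size. A hedged sketch (template name, queryset and batch size are placeholders); note that cross-document features such as continuous page numbering are lost with this approach:

```python
# Hedged sketch: batch rendering with WeasyPrint, merged with pypdf.
import io

from django.template.loader import render_to_string
from pypdf import PdfWriter
from weasyprint import HTML


def build_report_pdf(record_queryset, batch_size=500):
    writer = PdfWriter()
    batch = []

    def flush(rows):
        html = render_to_string("reports/batch.html", {"records": rows})
        writer.append(io.BytesIO(HTML(string=html).write_pdf()))

    # .iterator() keeps Django from caching every row in memory at once
    for record in record_queryset.iterator(chunk_size=batch_size):
        batch.append(record)
        if len(batch) >= batch_size:
            flush(batch)
            batch = []
    if batch:
        flush(batch)

    out = io.BytesIO()
    writer.write(out)
    return out.getvalue()
```

Running the generation in a background worker with its own memory limit, rather than in the pod that also serves traffic, is usually the other half of the fix.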
Django model with FK to learner app model Group is displaying options from user admin Group
I have the following models: learner app class Group(models.Model): short_name = models.CharField(max_length=50) # company acronym slug = models.SlugField(default="prepopulated_do_not_enter_text") contract = models.ForeignKey(Contract, on_delete=models.CASCADE) course = models.ForeignKey(Course, on_delete=models.CASCADE) start_date = models.DateField() end_date = models.DateField() notes = models.TextField(blank=True, null=True) class Meta: ordering = ["short_name"] unique_together = ( "short_name", "contract", ) management app I've set up an Invoice model: class Invoice(models.Model): staff = models.ForeignKey(Staff, on_delete=models.RESTRICT) group = models.ForeignKey(Group, on_delete=models.RESTRICT) date = models.DateField() amount = models.DecimalField(max_digits=7, decimal_places=2) note = models.CharField(max_length=500, null=True, blank=True) When I try to add an invoice, instead of the learner groups I'm offered the user admin Group options. Can anyone help with what I'm doing wrong? I have the learner Group as an FK in other models without issue. -
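The symptom (the admin offering auth Group options) usually means the group foreign key resolved to django.contrib.auth.models.Group, typically because of an import in the management app's models. The usual fix is to reference the learner app's model by its app label string, which removes the ambiguity; in this hedged sketch the "learner" and "management" labels are assumptions, and changing the target of an existing FK requires a migration:

```python
# Hedged sketch: lazy string references avoid clashing with auth's Group.
from django.db import models


class Invoice(models.Model):
    staff = models.ForeignKey("management.Staff", on_delete=models.RESTRICT)  # app label assumed
    group = models.ForeignKey("learner.Group", on_delete=models.RESTRICT)
    date = models.DateField()
    amount = models.DecimalField(max_digits=7, decimal_places=2)
    note = models.CharField(max_length=500, null=True, blank=True)
```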
Django Unit Test - using factory_boy build() on a Model with Many-To-Many relationship
I’m working on writing unit tests for a DRF project using pytest and factory_boy. I’m running into issues with many-to-many relationships. Specifically, when I try to use .build() in my unit tests, DRF attempts to access the M2M field which requires a saved object, leading to errors. tests_serializers.py def test_serialize_quality_valid_data(self): user = UserFactory.build() quality = QualityFactory.build(created_by=user) serializer = QualitySerializer(quality) data = serializer.data assert data["num"] == quality.num error: FAILED quality/tests/tests_serializers.py::TestQualitySerializer::test_serialize_quality_valid_data - ValueError: "<Quality: Quality object (None)>" needs to have a value for field "id" before this many-to-many relationship can be used. model.py class QualityTag(ExportModelOperationsMixin("quality_tag"), models.Model): name = models.CharField(max_length=64, unique=True) description = models.TextField() class Quality(ExportModelOperationsMixin("quality"), models.Model): num = models.IntegerField() title = models.CharField(max_length=64) ... tags = models.ManyToManyField(QualityTag, related_name="qualities", blank=True) factories.py class QualityTagFactory(DjangoModelFactory): class Meta: model = QualityTag name = factory.Sequence(lambda n: f"Quality Tag {n}") class QualityFactory(factory.django.DjangoModelFactory): class Meta: model = Quality num = factory.Faker("random_int", min=1, max=999) @factory.post_generation def tags(self, create, extracted, **kwargs): if not create: return if extracted: for tag in extracted: self.tags.add(tag) serializers.py class QualitySerializer(serializers.ModelSerializer): tags = QualityTagDetailSerializer(many=True) created_by = UserProfileSerializer() updated_by = UserProfileSerializer() class Meta: model = Quality fields = "__all__" read_only_fields = ["quality_num", "tags", "created_by", "updated_by"] I’ve been advised to switch to .create() instead of .build(), but I’d prefer to … -
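Because the serializer touches quality.tags and Django only allows M2M access on saved rows, the commonly suggested route is create() inside a database-backed test. A hedged sketch using the factories from the question (the assertion on tags only checks the relation, since the tag serializer's fields aren't shown):

```python
# Hedged sketch: persist the objects so the M2M manager is usable during serialization.
import pytest


@pytest.mark.django_db
def test_serialize_quality_valid_data():
    user = UserFactory()                       # create(): saved, has a pk
    tag = QualityTagFactory()
    quality = QualityFactory(created_by=user, tags=[tag])

    data = QualitySerializer(quality).data

    assert data["num"] == quality.num
    assert quality.tags.count() == 1
```

Staying with build() generally means keeping the serializer away from the M2M field (for example excluding or mocking tags), since an unsaved instance can never expose a usable tags manager.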
Django HttpOnly cookies not persisted on iOS Safari and WebView, but work on Chrome and Android ITP
I'm using Django to set HttpOnly and Secure cookies for my React web application. These cookies work perfectly on Chrome (both desktop and mobile) and Android devices. However, I'm encountering a major issue on iOS: -iOS Safari: Cookies are not persisted; they are treated like session cookies and are deleted when the browser is closed. -iOS React Native WebView: Similar to Safari, the cookies are not persisted. -İOS Chrome: It works. -Android React Native WebView: It works. MAX_AGE = 60 * 60 * 24 * 360 COMMON = { "httponly": True, "secure": True, "samesite": "None", "path": "/", "domain": ".kashik.net", "max_age": MAX_AGE, } def set_auth_cookies(response, access_token: str, refresh_token: str): response.set_cookie("refresh_token", refresh_token, **COMMON) response.set_cookie("access_token", access_token, **COMMON) return response I have confirmed that the max_age is set to a long duration, so it's not a session cookie by design. This issue seems to be specific to the iOS ecosystem. What could be causing this behavior on iOS Safari and WebView, and how can I ensure these cookies are properly persisted? <WebView ref={webRef} source={{ uri: WEB_URL }} style={styles.full} /* COOKIE PERSIST */ sharedCookiesEnabled thirdPartyCookiesEnabled incognito={false} /* FIX */ javaScriptEnabled domStorageEnabled allowsInlineMediaPlayback allowsFullscreenVideo mediaCapturePermissionGrantType="grant" startInLoadingState cacheEnabled={false} injectedJavaScriptBeforeContentLoaded={INJECT_BEFORE} injectedJavaScriptBeforeContentLoadedForMainFrameOnly={false} onMessage={handleWebViewMessage} onLoadEnd={() => { setLoadedOnce(true); lastLoadEndAt.current = … -
How to set up an in-project PostgreSQL database for a Django trading app?
I’m working on a Django-based trading platform project. Currently, my setup connects to a hosted PostgreSQL instance (Render). My client has now requested an “in-project PostgreSQL database”. From my understanding, this means they want the database to run locally within the project environment (rather than relying on an external hosted DB). Question: What is the best practice for including PostgreSQL directly with the project? Should I: Use Docker/Docker Compose to spin up PostgreSQL alongside the Django app, Include migrations and a seed dump in the repo so the DB can be created on any machine, or Is there another recommended approach? I want the project to be portable so the client (or other developers) can run it without needing to separately set up PostgreSQL. -
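Whichever way the database is provisioned (a Docker Compose service next to the app is the usual choice), the Django side stays portable if the connection details come from environment variables with local defaults; a minimal settings sketch with illustrative variable names:

```python
# settings.py sketch: the same code runs against a Compose-managed Postgres,
# a local install, or the hosted Render instance.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "trading"),
        "USER": os.environ.get("POSTGRES_USER", "trading"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

With this in place, a postgres service in docker-compose plus the committed migrations (and optionally a fixture loaded via manage.py loaddata) is usually enough for other developers to run the project without installing PostgreSQL separately.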
Deployment errors
When I try to deploy my web app (built with Windsurf) to Heroku, I get the following errors: Error: Unable to generate Django static files. The 'python manage.py collectstatic --noinput' Django management command to generate static files failed. See the traceback above for details. You may need to update application code to resolve this error. Or, you can disable collectstatic for this application: $ heroku config:set DISABLE_COLLECTSTATIC=1 (https://devcenter.heroku.com/articles/django-assets). Please help me fix it. -
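The Heroku message is generic; the real cause is in the traceback above it, and the most frequent one is a missing STATIC_ROOT (or static storage) in settings. A hedged sketch of a minimal static-files configuration for this kind of deploy; WhiteNoise is an assumption here, and the STORAGES form needs Django 4.2+ (older projects set STATICFILES_STORAGE instead):

```python
# Hedged sketch: the settings collectstatic needs on a Heroku-style deploy.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"   # collectstatic fails without a target directory

STORAGES = {
    "default": {"BACKEND": "django.core.files.storage.FileSystemStorage"},
    "staticfiles": {"BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage"},
}
```

If WhiteNoise is used, its middleware also goes directly after SecurityMiddleware so the collected files are served by the dyno itself.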
How can I set a filter by month on a folium map? (Django project)
I am using folium and I can see the folium map with markers on the front end. I use checkboxes, but because I have two months, I want to add radio buttons so that only one month can be selected. I want to have filters by month and also by status, but with different labels. I used ChatGPT but it didn't help me, and I have also tried many things. What do you suggest as an alternative? I tried this but it is not working: GroupedLayerControl( groups={'groups1': [fg1, fg2]}, collapsed=False, ).add_to(m) My code: def site_location(request): qs = buffer.objects.filter( meter="Yes", ext_operators="No", ).exclude( # hostid="No" ).values('n', 'name', 'village_city', 'region_2', 'rural_urban', 'hostid', 'latitude', 'longitude', 'site_type', 'meter', 'ext_operators', 'air_cond', 'kw_meter', 'kw_month_noc', 'buffer', 'count_records', 'fix_generator', 'record_date', 'id', 'date_meter' ) data = list(qs) if not data: return render(request, "Energy/siteslocation.html", {"my_map": None}) m = folium.Map(location=[42.285649704648866, 43.82418523761071], zoom_start=8) critical = folium.FeatureGroup(name="Critical %(100-..)") warning = folium.FeatureGroup(name="Warning %(70-100)") moderate = folium.FeatureGroup(name="Moderate %(30-70)") positive = folium.FeatureGroup(name="Positive %(0-30)") negative = folium.FeatureGroup(name="Negative %(<0)") check_noc = folium.FeatureGroup(name="Check_noc") check_noc_2 = folium.FeatureGroup(name="Check_noc_2") for row in data: comments_qs = SiteComment.objects.filter(site_id=row["id"]).order_by('-created_at')[ :5] if comments_qs.exists(): comments_html = "" for c in comments_qs: comments_html += ( f"<br><span style='font-size:12px; color:black'>" f"{c.ip_address} - {c.created_at.strftime('%Y-%m-%d %H:%M')}: {c.comment}</span>" ) else: comments_html = "<br><span style='font-size:13px; color:black'>....</span>" html = ( f"<a target='_blank' … -
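For the radio-button behaviour, folium's GroupedLayerControl (in folium.plugins, available in recent folium versions) renders a group as mutually exclusive choices when exclusive_groups=True, provided the feature groups are added to the map before the control. A hedged sketch with placeholder month layers; the status layers can stay in the normal LayerControl as checkboxes:

```python
# Hedged sketch: one radio-button group for months.
import folium
from folium.plugins import GroupedLayerControl

m = folium.Map(location=[42.2856, 43.8242], zoom_start=8)

august = folium.FeatureGroup(name="August").add_to(m)
september = folium.FeatureGroup(name="September").add_to(m)

folium.Marker([42.30, 43.80], popup="site 1 (Aug)").add_to(august)
folium.Marker([42.40, 43.90], popup="site 1 (Sep)").add_to(september)

GroupedLayerControl(
    groups={"Month": [august, september]},
    exclusive_groups=True,   # radio buttons: exactly one month visible at a time
    collapsed=False,
).add_to(m)
```

If the snippet from the question produced nothing, checking the installed folium version is worthwhile, since GroupedLayerControl is a relatively recent plugin.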
Django python manage.py runserver problem
When I run this command: python manage.py runserver I receive this response: {"error":"You have to pass token to access this app."} I ran my Django app previously without any problems, but this time it gives this error. Do you have any suggestions for how best to correct it? I am using the correct localhost address and the port suggested by Django; it previously worked without problems and now I have this issue. -
Multiple Data Entry in Django ORM
I have been trying to create a way that my Django database will store data for 7 consecutive days because I want to use it to plot a weekly graph but the problem now is that Django doesn't have a datetime field that does that. '''My Code''' #Model to save seven consecutive days class SevenDayData(models.Model): '''Stores the date of the latest click and stores the value linked to that date in this case our clicks''' day1_date= models.DateField(default=None) day1_value= models.CharField(max_length=20) day2_date= models.DateField(default=None) day2_value= models.CharField(max_length=20) day3_date= models.DateField(default=None) day3_value= models.CharField(max_length=20) day4_date= models.DateField(default=None) day4_value= models.CharField(max_length=20) day5_date= models.DateField(default=None) day5_value= models.CharField(max_length=20) day6_date= models.DateField(default=None) day6_value= models.CharField(max_length=20) day7_date= models.DateField(default=None) day7_value= models.CharField(max_length=20) #updating the model each time the row is saved updated_at= models.DateTimeField(auto_now= True) #function that handles all the saving and switching of the 7 days def shift_days(self, new_value): #getting todays date today= date.today() #shifting every data out from day_7 each time a date is added i.e the 7th day is deleted from the db once the time is due self.day7_date, self.day7_value = self.day6_date, self.day6_value #Overwriting each date with the next one self.day6_date, self.day6_value = self.day5_date, self.day5_value self.day5_date, self.day5_value = self.day4_date, self.day4_value self.day4_date, self.day4_value = self.day3_date, self.day3_value self.day3_date, self.day3_value = self.day2_date, self.day2_value self.day2_date, self.day2_value = self.day1_date, self.day1_value #writing todays … -
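The usual relational alternative to seven date/value column pairs is one row per day, after which "the last 7 days" is just a filtered query; a minimal sketch with illustrative field names:

```python
# Hedged sketch: one row per day instead of seven columns.
from datetime import date, timedelta

from django.db import models


class DailyClicks(models.Model):
    day = models.DateField(unique=True)
    clicks = models.PositiveIntegerField(default=0)

    class Meta:
        ordering = ["day"]


def record_clicks(value):
    """Create or update today's row."""
    obj, _ = DailyClicks.objects.update_or_create(day=date.today(), defaults={"clicks": value})
    return obj


def last_seven_days():
    """(day, clicks) pairs for the weekly plot."""
    cutoff = date.today() - timedelta(days=6)
    return list(DailyClicks.objects.filter(day__gte=cutoff).values_list("day", "clicks"))
```

This also removes the need for any shifting logic: older rows can simply be ignored or purged.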
session.get not getting the correct file on remote server / local server
I have this piece of code that works completely fine as a standalone Python script. When I try to test it on a local server, it returns an HTML page saying the link is not valid (I am expecting a PDF download). Both the local server and the Python script return a 200. url is the download link of a PDF file on the website. def get_file(url): headers = { 'User-Agent': user_agent, 'Cookie': cookie, } session = requests.Session() try: response = session.get(url, headers=headers, verify=False) filename = response.headers['Content-Disposition'].split('"')[-2] with open(filename, 'wb') as f: f.write(response.content) fileFullPath = os.path.abspath(filename) print(fileFullPath) except requests.exceptions.HTTPError as err: print("file download fail err {}".format(err.response.status_code)) -
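When the same code gets a PDF in one environment and an HTML "link is not valid" page in another, the server is almost certainly rejecting the session/cookie from the second environment, so it helps to log what actually came back before reading Content-Disposition. A small hedged sketch (user_agent and cookie are whatever the working script used):

```python
# Hedged diagnostic: confirm the response really is a file before saving it.
import requests


def get_file_debug(url, user_agent, cookie):
    session = requests.Session()
    response = session.get(url, headers={"User-Agent": user_agent, "Cookie": cookie}, verify=False)
    response.raise_for_status()

    content_type = response.headers.get("Content-Type", "")
    print("status:", response.status_code, "content-type:", content_type)

    if "text/html" in content_type:
        # Usually the cookie/session is not valid when sent from this host
        # (different IP, expired login, or the header wasn't forwarded).
        print(response.text[:300])
        return None
    return response.headers.get("Content-Disposition")
```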
Failed to create subscription: LinkedIn Developer API real-time notification error
I’m working on enabling real-time notifications from LinkedIn. I can successfully retrieve access tokens, but when I try to create a real-time notification subscription, the API returns the following error. Could someone please help me understand what might be causing this issue? Error Message { "message": "Failed to create subscription. RestException{_response=RestResponse[headers={Content-Length=13373, content-type=application/x-protobuf2; symbol-table="https://ltx1-app150250.prod.linkedin.com:3778/partner-entities-manager/resources|partner-entities-manager-war--60418946", x-restli-error-response=true, x-restli-protocol-version=2.0.0},cookies=[],status=400,entityLength=13373]}", "status": 400 } My code is below def linkedinCallBack(request): """Handle LinkedIn OAuth callback.""" code = request.GET.get('code') state = request.GET.get('state') if not code or not state: return handle_redirect(request, message_key='missing_params') try: error, state_data = parse_state_json(state) if error: return handle_redirect(request, message_key='missing_params') error, platform = get_social_platform(state_data['platform_id']) if error: return handle_redirect(request, message=error) redirect_uri = request.build_absolute_uri( reverse('social:linkedin_callback')) # Exchange code for access token token_url = 'https://www.linkedin.com/oauth/v2/accessToken' data = { 'grant_type': 'authorization_code', 'code': code, 'redirect_uri': redirect_uri, 'client_id': os.environ.get('LINKEDIN_APP_ID'), 'client_secret': os.environ.get('LINKEDIN_APP_SECRET'), } headers = {'Content-Type': 'application/x-www-form-urlencoded'} response = requests.post(token_url, data=data, headers=headers) if response.status_code != 200: return handle_redirect(request, message_key='token_failed') token_data = response.json() access_token = token_data.get('access_token', None) refresh_token = token_data.get('refresh_token', None) refresh_token_expires_in = token_data.get( 'refresh_token_expires_in', None) expires_in = token_data.get('expires_in', 3600) if not access_token: return handle_redirect(request, message_key='token_failed') LINKEDIN_API_VERSION = os.environ.get('LINKEDIN_API_VERSION') org_url = "https://api.linkedin.com/v2/organizationalEntityAcls" params = { 'q': 'roleAssignee', 'role': 'ADMINISTRATOR', 'state': 'APPROVED', 'projection': '(elements*(*,organizationalTarget~(id,localizedName)))' } headers = { 'Authorization': f'Bearer {access_token}', 'X-Restli-Protocol-Version': '2.0.0', 'LinkedIn-Version': LINKEDIN_API_VERSION } … -
Does anyone have any idea about in-project PostgreSQL database? [closed]
I have recently been working on a trading project. The client now has a requirement for an in-project PostgreSQL database. Has anyone worked on this, or on something similar? -
Building a development image with Node.js and a production image without Node.js (with only precompiled files)
I have a Django application, which is using TailwindCSS for styling (using the django-tailwind package). I am developing locally with docker compose and plan to deploy using the same. So I have the following requirements For development: I need to run the python manage.py tailwind start or npm run dev command so that the postcss watcher rebuilds the static files when I am developing the application (this requires NodeJS) For Production: I compile the CSS files at build time and do not need NodeJS overhead. I can always create two Dockerfiles for development and production, but I do not want to do that unless absolutely necessary. How can I do both of these in a single Dockerfile. This is the current Dockerfile I have ARG BUILD_TYPE=production FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim AS base-builder # Set environment variables to optimize Python ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 # Set environment variables to optimize UV ENV UV_COMPILE_BYTECODE=1 ENV UV_SYSTEM_PYTHON=1 WORKDIR /app # Install the requirements COPY uv.lock . COPY pyproject.toml . # Update the package list and install Node.js RUN apt-get update && \ apt-get install -y nodejs npm && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* FROM base-builder AS production-builder RUN echo "Running the Production … -
dj-rest-auth + allauth not sending email
Context: I'm setting DRF + dj-rest-auth + allauth + simple-jwt for user authentication. Desired behaviour: Register with no username, only email. Authorize login only if email is verified with a link sent to email. Social login to be added. Problem: It seems that confirmation email is not being sent. When I run the following test I see that it wanted to send some email but it's not found anywhere. Test code: client = APIClient() url = reverse("rest_register") # dj-rest-auth register endpoint # Register a user data = { "email": "user1@example.com", "password1": "StrongPass123!", "password2": "StrongPass123!", } response = client.post(url, data, format="json") assert response.status_code == 201, response.data print(response.data) # Manually verify the user from allauth.account.models import EmailConfirmation user = User.objects.get(email="user1@example.com") from django.core import mail print(f'Amount of sent emails: {len(mail.outbox)}') print(f'Email Confimation exists: {EmailConfirmation.objects.filter(email_address__email=user.email).exists()}') This prints: {'detail': 'Verification e-mail sent.'} Amount of sent emails: 0 Email Confimation exists: False My code: core/urls.py from django.contrib import admin from django.urls import include, path urlpatterns = [ path('api/auth/', include('authentication.urls')), path("admin/", admin.site.urls), path("accounts/", include("allauth.urls")), ] authentication/urls.py from dj_rest_auth.jwt_auth import get_refresh_view from dj_rest_auth.registration.views import RegisterView, VerifyEmailView from dj_rest_auth.views import LoginView, LogoutView, UserDetailsView from django.urls import path from rest_framework_simplejwt.views import TokenVerifyView urlpatterns = [ path("register/", RegisterView.as_view(), name="rest_register"), path("register/verify-email/", VerifyEmailView.as_view(), … -
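allauth only queues the verification mail when the account settings ask for it, and with the default HMAC-based confirmation no EmailConfirmation row is created at all, so the second print can legitimately be False even when mail is sent. A hedged sketch of the settings combination usually needed for email-only signup with mandatory verification (values illustrative; very recent allauth versions replace several of these with ACCOUNT_LOGIN_METHODS / ACCOUNT_SIGNUP_FIELDS):

```python
# Hedged settings sketch for "email-only signup + mandatory verification".
ACCOUNT_AUTHENTICATION_METHOD = "email"
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_UNIQUE_EMAIL = True
ACCOUNT_EMAIL_VERIFICATION = "mandatory"   # without this, no confirmation mail is queued

SITE_ID = 1  # allauth builds the confirmation link via the Sites framework

# Production backend; Django's test runner swaps in locmem automatically,
# so mail.outbox fills up once a message is actually sent.
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
DEFAULT_FROM_EMAIL = "noreply@example.com"
```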
Celery task called inside another task always goes to default queue even with queue specified
I’m running Celery with Django and Celery Beat. Celery Beat triggers an outer task every 30 minutes, and inside that task I enqueue another task per item. Both tasks are decorated to use the same custom queue, but the inner task still lands in the default queue. from celery import shared_task from django.db import transaction @shared_task(queue="outer_queue") def sync_all_items(): """ This outer task is triggered by Celery Beat every 30 minutes. It scans the DB for outdated items and enqueues a per-item task. """ items = Item.objects.find_outdated_items() for item in items: # I expect this to enqueue on outer_queue as well process_item.apply_async_on_commit(args=(item.pk,)) @shared_task(queue="outer_queue") def process_item(item_id): do_some_processing(item_id=item_id) Celery beat config: CELERY_BEAT_SCHEDULE = { "sync_all_items": { "task": "myapp.tasks.sync_all_items", "schedule": crontab(minute="*/30"), # Beat is explicitly sending the outer task to outer_queue "options": {"queue": "outer_queue"}, } } But when I run the process_item task manually, e.g. from a Django view, it respects the decorator and lands in the expected queue. I’ve tried: adding queue='outer_queue' to apply_async_on_commit; calling process_item.delay(item.pk) instead; using .apply_async(args=[item.pk], queue='outer_queue') inside transaction.on_commit. But no matter what, the inner tasks still show up in the default queue. -
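One way to take the call site out of the equation is static routing by task name, so the queue is decided in configuration no matter whether the task is enqueued by Beat, a view, or another task. A minimal sketch using the Django-style setting (task paths copied from the question); the worker must of course consume that queue, e.g. celery -A proj worker -Q outer_queue:

```python
# Hedged sketch: route both tasks by name; the plain Celery key is task_routes.
CELERY_TASK_ROUTES = {
    "myapp.tasks.sync_all_items": {"queue": "outer_queue"},
    "myapp.tasks.process_item": {"queue": "outer_queue"},
    # or route the whole module at once: "myapp.tasks.*": {"queue": "outer_queue"},
}
```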
Django + SimpleJWT: Access tokens sometimes expire immediately ("credentials not provided") when calling multiple endpoints
I’m building a Vue 3 frontend (deployed on Vercel at example.com) with a Django REST Framework backend (deployed on Railway at api.example.com). Authentication uses JWT access/refresh tokens stored in HttpOnly cookies (access, refresh). Access token lifetime = 30 minutes Refresh token lifetime = 1 day Cookies are set with: HttpOnly; Secure; SameSite=None; Domain=.example.com Django timezone settings: LANGUAGE_CODE = "en-us" TIME_ZONE = "Africa/Lagos" USE_I18N = True USE_TZ = True The problem When the frontend calls multiple API endpoints simultaneously (e.g. 5 requests fired together), some succeed but others fail with: 401 Unauthorized {"detail":"Authentication credentials were not provided."} In the failing requests I can see the cookies are sent: cookie: access=...; refresh=... But SimpleJWT still rejects the access token, sometimes immediately after login. It looks like the exp claim in the access token is already in the past when Django validates it. What I’ve tried Verified cookies are set with correct domain and withCredentials: true. Implemented an Axios response interceptor with refresh token retry. Ensured CookieJWTAuthentication checks both Authorization header and access cookie. -
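Two hedged observations may narrow this down: "Authentication credentials were not provided" normally means the authentication class found no token at all (an expired token usually yields "token not valid"), so the cookie-reading path in CookieJWTAuthentication is worth logging; and if decoded tokens really do show exp in the past right after login, a small clock skew between the issuing and validating processes is a common culprit, which SimpleJWT's LEEWAY setting can absorb while the drift is investigated:

```python
# Hedged sketch: lifetimes as described above plus a small validation leeway.
from datetime import timedelta

SIMPLE_JWT = {
    "ACCESS_TOKEN_LIFETIME": timedelta(minutes=30),
    "REFRESH_TOKEN_LIFETIME": timedelta(days=1),
    "LEEWAY": 30,                      # seconds of tolerance when checking exp/nbf
    "AUTH_HEADER_TYPES": ("Bearer",),
}
```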
"Django: Cannot use ImageField because Pillow is not installed (Python 3.13, Windows)
PS C:\Users\ltaye\ecommerce> python manage.py runserver Watching for file changes with StatReloader Performing system checks... Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 1043, in _bootstrap_inner self.run() File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 994, in run self._target(*self._args, **self._kwargs) File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\core\management\commands\runserver.py", line 134, in inner_run self.check(**check_kwargs) File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\core\management\base.py", line 569, in check raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System check identified some issues: ERRORS: store.Product.image: (fields.E210) Cannot use ImageField because Pillow is not installed. HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "python -m pip install Pillow". System check identified 1 issue (0 silenced). I created a Django project and added a model with an ImageField. When I run python manage.py runserver, I get the following error: SystemCheckError: Cannot use ImageField because Pillow is not installed. I expected the server to start normally and let me upload images. I already tried: running python -m pip install Pillow; running pip install Pillow inside my project virtual environment; and upgrading pip with python -m pip install --upgrade pip. But the error still shows up when I start the server. I’m using Python 3.13 on Windows 11. -
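This error while Pillow appears installed almost always means pip installed it into a different interpreter than the one running manage.py (easy to do on Windows with several Pythons and a venv). A quick check, run with exactly the same command used for runserver:

```python
# Hedged check: is Pillow importable from the interpreter that runs Django?
import sys

print(sys.executable)  # the python actually being used

try:
    import PIL
    print("Pillow", PIL.__version__, "at", PIL.__file__)
except ImportError:
    # Install into *this* interpreter explicitly, e.g.:
    #   <path printed above> -m pip install Pillow
    print("Pillow is not importable from this interpreter")
```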
How to write documentation for a project / Django project?
How do you write documentation for your projects? How do you improve the readability of your documentation? Do you have any tips for writing documentation? Thanks! I'm trying to write my first documentation for a Django API project and I need some help. -
How do I connect a web app to a thermal printer for printing?
I built a web app and bought a thermal printer. I generate receipts from the web app, but I don't know how to send them to the printer, and the connection is not stable. Which printer is cost effective and has a stable connection? How can I send the receipt for printing directly from my web app without third-party intervention? I bought a printer already, but I have to reconnect on every print, and even reconnecting is hard. I'm using Django for my backend and React for the front end. I have not been able to print directly from my app; all other printing went through a third-party app. -
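For a receipt printer that speaks ESC/POS and is reachable over the LAN, one common route is printing straight from the Django backend with the python-escpos package, with no third-party app in between; a hedged sketch (printer IP and receipt contents are placeholders, and USB models use escpos.printer.Usb instead):

```python
# Hedged sketch: send plain-text receipt lines to a networked ESC/POS printer.
from escpos.printer import Network


def print_receipt(lines, printer_ip="192.168.1.50"):
    printer = Network(printer_ip)      # ESC/POS over the network, default port 9100
    printer.text("My Shop\n")
    for line in lines:
        printer.text(line + "\n")
    printer.cut()
    printer.close()
```

An Ethernet or Wi-Fi ESC/POS printer on a fixed address tends to be far more dependable for this setup than Bluetooth pairing, which matches the reconnection trouble described above.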
Ubuntu 22.04 Django cronjob - No MTA installed, discarding output - Error
If I run this: source /var/www/django/env/bin/activate && cd /var/www/django/ && python manage.py cron in the Cockpit GUI terminal (Ubuntu Server 22.04), an email is sent. But if I run it as a cron job, in crontab: * * * * * administrator source /var/www/html/django/env/bin/activate && cd /var/www/html/django/ && python manage.py cron I get the error (CRON) info (No MTA installed, discarding output) What am I missing?
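The "(No MTA installed, discarding output)" line is not the failure itself: it only means the job wrote something to stdout/stderr and cron had no mail system to deliver it with. Redirecting the output to a log file (and invoking bash explicitly, since source is a bash builtin and cron runs /bin/sh by default) usually reveals the real error; a hedged sketch of the system-crontab entry, keeping the user column from the question:

```
# Hedged /etc/crontab-style entry: run through bash and log instead of mailing.
* * * * * administrator /bin/bash -c 'source /var/www/html/django/env/bin/activate && cd /var/www/html/django && python manage.py cron' >> /var/log/django-cron.log 2>&1
```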