Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How to persist multi-step form data between views in Django without committing to DB?
One-line project/context, a short description of the problem and constraints, minimal code (models, the two views, form classes, snippets of urls.py), the exact observed behavior or error, the alternatives tried (session, temporary model) and why they are insufficient, and the ask: "Given X and Y, which approach best ensures Z, and how do I implement it?"
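For reference, a minimal sketch of one commonly used baseline for this kind of multi-step flow: keep each step's cleaned data in the session and only write to the database on the final step (django-formtools' SessionWizardView wraps the same idea). The form classes, URL names, and the Order model below are placeholders, not part of the original question:

    from django.shortcuts import redirect, render

    def step_one(request):
        form = StepOneForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            # Only JSON-serialisable values survive the default session backend.
            request.session["step_one"] = form.cleaned_data
            return redirect("step_two")
        return render(request, "wizard/step_one.html", {"form": form})

    def step_two(request):
        step_one_data = request.session.get("step_one")
        if step_one_data is None:
            return redirect("step_one")  # guard against skipping step one
        form = StepTwoForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            # Single database commit at the very end of the wizard.
            Order.objects.create(**step_one_data, **form.cleaned_data)
            del request.session["step_one"]
            return redirect("done")
        return render(request, "wizard/step_two.html", {"form": form})
-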
iOS/web Auth Client ID Handling for Google Sign In
To preface, I'm not asking for a direct fix here; I'm just curious whether what I'm doing is an appropriate auth flow for setting a dynamic client ID based on the device platform. I have 2 applications that use the same Django Allauth backend. One of them is for web, and the other is in Flutter (iOS). Both applications call an endpoint that routes to GoogleDirectLogin(APIView). Note that my current implementation has a method get_client_id that dynamically selects the appropriate client ID based on the device type (X-Client-Type header):

    class GoogleDirectLogin(APIView):
        permission_classes = [AllowAny]

        def post(self, request):
            # Get token from request
            token = request.data.get('id_token') or request.data.get('access_token')
            if not token:
                return Response(
                    {'error': 'Missing token in request'},
                    status=status.HTTP_400_BAD_REQUEST
                )
            # Importing from the middleware is crucial for checking multiple
            # client IDs based on the JSON header value.
            from auth_backend.middleware import get_client_id  # Import from middleware
            client_id = get_client_id(request)
            print(f"using client ID: {client_id}")
            try:
                # Verify Google token
                identity_data = id_token.verify_oauth2_token(
                    token,
                    google_requests.Request(),
                    client_id,
                    clock_skew_in_seconds=10
                )
                # Validate issuer
                if identity_data.get('iss') not in ['accounts.google.com', 'https://accounts.google.com']:
                    return Response(
                        {'error': 'Invalid token issuer'},
                        status=status.HTTP_400_BAD_REQUEST
                    )
                # Exchange token with your internal API
                response = requests.post(
                    settings.INTERNAL_API_CALL_GOOGLE,
                    json={'access_token': token}
                )
                response.raise_for_status()
                auth_data = response.json()
                return Response({
                    'access': auth_data['access'],
                    …
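For completeness, a minimal sketch of what such a header-based lookup might look like; the setting names and the default fallback are assumptions, not code from the original question:

    # auth_backend/middleware.py (sketch)
    from django.conf import settings

    def get_client_id(request):
        # Choose the Google OAuth client ID from the X-Client-Type header
        # sent by each front end; falls back to the web client.
        client_type = request.headers.get("X-Client-Type", "web").lower()
        if client_type == "ios":
            return settings.GOOGLE_IOS_CLIENT_ID  # hypothetical setting name
        return settings.GOOGLE_WEB_CLIENT_ID      # hypothetical setting name
-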
Flaky Circle CI tests (django): ImportError: cannot import name "task" from "app.tasks" (unknown location)
Sometimes I have many flaky test failures due to one error: ImportError: cannot import name 'task_import_events_to_db' from 'app.tasks' (unknown location). It seems the tests fail because of this import error. Meanwhile, other branches pass without issues, and tests also pass normally after merging. The app is in INSTALLED_APPS, and locally everything works, but not on CircleCI. Stack: Django, PostgreSQL, Redis
-
KeyError 'email' for django-authtools UserCreationForm
I am experiencing an error whose origin I can't find. I believe the error stems from GitHub - fusionbox/django-authtools: A custom User model for everybody!, and as a disclaimer, I asked this same question on the project's GitHub repository over a year ago, but nobody has answered; hopefully someone here may have some insights. Every now and then Django complains that email is not in self.fields[User.USERNAME_FIELD] when I try to open the admin 'Add user' form (see below). I can see that email isn't in self.fields, but why it isn't is not clear to me. What absolutely confuses me is that the error is sporadic: if I experience the error in my main browser window, I don't in a new Incognito window. Restarting the app makes the error go away for some time, but then it reappears, and the only way to solve it is to restart the app again. My UserCreationForm, a child of authtools's UserCreationForm, looks like this:

    class UserCreationForm(UserCreationForm):
        """
        A UserCreationForm with optional password inputs.
        """
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.fields["password1"].required = False
            self.fields["password2"].required = False
            # If one field gets autocompleted but not the other, our 'neither
            # password or both password' validation …
-
Django check at runtime if code is executed under "runserver command" or not
I have a Django-based project that wraps some custom code, and during import this code loads some heavy files before being executed. I need to check whether the imports are being executed under the "runserver" command or not, so that I can prevent loading the heavy files during installation. How can I check whether code is being executed under the runserver command?
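A minimal sketch of one common answer, inspecting sys.argv at import time; note this is an assumption about how the project is run, and it does not detect Gunicorn/Daphne/uWSGI processes, which never go through runserver:

    import sys

    def is_runserver() -> bool:
        # manage.py leaves the subcommand in sys.argv, so "runserver" appears
        # there only when the development server was started.
        return "runserver" in sys.argv

    if is_runserver():
        HEAVY_DATA = load_heavy_files()  # hypothetical loader for the heavy files
    else:
        HEAVY_DATA = None

Guarding the loading inside an AppConfig.ready() method with the same check is often preferred over doing the work at module import time.
-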
Django REST project doesn’t detect apps inside the “apps” directory when running makemigrations
I have a Django REST project where I created a directory called apps to store all my apps. Each app is added to the INSTALLED_APPS list in my settings file like this:

    INSTALLED_APPS = [
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        # APPS
        'apps.accounts.apps.AccountsConfig',
        'apps.ads.apps.AdsConfig',
    ]

But when I run python manage.py makemigrations, Django doesn't detect any changes; it seems like it doesn't recognize my apps at all. Can anyone help me figure out what might be wrong? Thanks a lot
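Not from the original question, but a likely place to look is each app's AppConfig: when apps live inside a package, name must be the full dotted path, and every app also needs a migrations/__init__.py file. A sketch of what apps/accounts/apps.py would look like under that assumption:

    # apps/accounts/apps.py (sketch)
    from django.apps import AppConfig

    class AccountsConfig(AppConfig):
        default_auto_field = "django.db.models.BigAutoField"
        name = "apps.accounts"   # full dotted path, not just "accounts"
-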
Cannot query "admin": Must be "ChatMessage" instance in Django
In the view function it shows this error:

    Request Method: GET
    Request URL: http://127.0.0.1:8000/inbox/
    Django Version: 4.2.25
    Exception Type: ValueError
    Exception Value: Cannot query "admin": Must be "ChatMessage" instance.
    Exception Location: D:\Socialmedia\.venv\lib\site-packages\django\db\models\sql\query.py, line 1253, in check_query_object_type
    Raised during: core.views.messages.inbox
    Python Executable: D:\Socialmedia\.venv\Scripts\python.exe
    Python Version: 3.9.13
    Python Path: ['D:\\Socialmedia', 'C:\\Users\\ER-RPJ\\AppData\\Local\\Programs\\Python\\Python39\\python39.zip', 'C:\\Users\\ER-RPJ\\AppData\\Local\\Programs\\Python\\Python39\\DLLs', 'C:\\Users\\ER-RPJ\\AppData\\Local\\Programs\\Python\\Python39\\lib', 'C:\\Users\\ER-RPJ\\AppData\\Local\\Programs\\Python\\Python39', 'D:\\Socialmedia\\.venv', 'D:\\Socialmedia\\.venv\\lib\\site-packages']

The view:

    def inbox(request):
        if request.user.is_authenticated:
            user_id = request.user
            chat_messages = ChatMessage.objects.filter(
                id__in=Subquery(
                    User.objects.filter(
                        Q(chat_sender__chat_receiver=user_id) |
                        Q(chat_receiver__chat_sender=user_id)
                    ).distinct().annotate(
                        last_msg=Subquery(
                            ChatMessage.objects.filter(
                                Q(sender=OuterRef('id'), receiver=user_id) |
                                Q(receiver=OuterRef('id'), sender=user_id)
                            ).order_by('-id')[:1].values_list('id', flat=True)
                        )
                    ).values_list('last_msg', flat=True).order_by('-id')
                )
            ).order_by('-id')
            context = {
                'chat_messages': chat_messages,
            }
            return render(request, 'chat/inbox.html', context)

My model:

    class ChatMessage(models.Model):
        user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True, blank=True, related_name='chat_user')
        chat_sender = models.ForeignKey(User, on_delete=models.SET_NULL, null=True, blank=True, related_name='chat_sender')
        chat_receiver = models.ForeignKey(User, on_delete=models.SET_NULL, null=True, blank=True, related_name='chat_receiver')
        message = models.TextField()
        is_read = models.BooleanField(default=False)
        date = models.DateTimeField(auto_now_add=True)
        mid = ShortUUIDField(length=7, max_length=25, alphabet='abcdefghijklmnopqrstuvwxyz')

        # def __str__(self):
        #     return self.user

        class Meta:
            verbose_name_plural = 'Chat messages'
-
Is it reasonable to use Cloud storage for async webhook processing on Cloud Run
I'm processing webhooks on Cloud Run (Django) that need async handling because processing takes 30+ seconds, but the webhook provider times out at 30s. Since Cloud Run is stateless and spins up per request (no persistent background workers like Celery), I'm using this pattern:

    # 1. Webhook endpoint
    def receive_webhook(request):
        blob_name = f"webhooks/{uuid.uuid4()}.json"
        bucket.blob(blob_name).upload_from_string(json.dumps(request.data))
        webhook = WebhookPayload.objects.create(gcs_path=blob_name)
        create_cloud_task(payload_id=webhook.id)
        return Response(status=200)  # Fast response

The Cloud Task then calls the following endpoint, passing along the ID that points at the Cloud Storage path recorded by the original webhook endpoint:

    def process_webhook(request):
        webhook = WebhookPayload.objects.get(id=request.data['payload_id'])
        payload = json.loads(bucket.blob(webhook.gcs_path).download_as_text())
        process_data(payload)  # 30+ seconds
        bucket.blob(webhook.gcs_path).delete()

My main questions: Is GCS + Cloud Tasks the right pattern for Cloud Run's model, or is temporarily storing the JSON directly in a Django model a better approach, since Cloud Tasks handles the queueing? Should I be using Pub/Sub instead? My understanding is that Pub/Sub would be more appropriate for broadcasting to numerous subscribers, and currently I only have the one Django monolith. Thanks for any advice that comes my way.
-
Django ORM: Add integer days to a DateField to annotate next_service and filter it (PostgreSQL)
I am trying to annotate a queryset with next_service = last_service + verification_periodicity_in_days and then filter by that date. I am on Django 5.2.6 with PostgreSQL. last_service is a DateField. verification_periodicity lives on a related SubCategory and is the number of days (integer). Models (minimal):

    # main/models.py
    class Category(models.Model):
        name = models.CharField(max_length=100)

    class SubCategory(models.Model):
        category = models.ForeignKey(Category, on_delete=models.CASCADE, related_name='subcategories')
        name = models.CharField(max_length=100)
        verification_periodicity = models.IntegerField()  # days

    class Asset(models.Model):
        sub_category = models.ForeignKey(SubCategory, on_delete=models.PROTECT)
        last_service = models.DateField(null=True, blank=True)

Goal: compute next_service = last_service + verification_periodicity days in the database, expose it in the API, and support filtering like ?next_date__gte=2025-12-06.

What I tried. Simple cast and multiply:

    from django.db.models import ExpressionWrapper, F, DateField, IntegerField
    from django.db.models.functions import Cast

    qs = qs.annotate(
        next_service=ExpressionWrapper(
            F('last_service') + Cast(F('sub_category__verification_periodicity'), IntegerField()) * 1,
            output_field=DateField()
        )
    )

This does not shift by days and later caused type issues. Filtering by the annotated date also did not work as expected.

Using a Python timedelta:

    from datetime import timedelta

    qs = qs.annotate(
        next_service=F('last_service') + timedelta(days=1) * F('sub_category__verification_periodicity')
    )

This produced a duration in seconds in the serialized output. Example: "next_service": "86400.0" for one day, rather than a proper date. I need a date. Errors seen along …
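Not part of the original post, but a sketch of the approach that is usually suggested on PostgreSQL: build the interval with a timedelta multiplied by the integer field, then Cast the result back to a date so that both serialization and filtering see a plain date. Field names follow the models above:

    from datetime import date, timedelta
    from django.db.models import DateField, F
    from django.db.models.functions import Cast

    qs = Asset.objects.annotate(
        next_service=Cast(
            F("last_service") + timedelta(days=1) * F("sub_category__verification_periodicity"),
            output_field=DateField(),
        )
    )

    # Filtering on the annotation then works with ordinary date lookups:
    due_soon = qs.filter(next_service__gte=date(2025, 12, 6))
-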
Stuck with Django ASGI server (Daphne) and AWS EB (with Docker)
I’m trying to deploy a Django application that uses Channels + ASGI + Daphne on AWS Elastic Beanstalk with the Docker platform. My container builds successfully, migrations run, and Daphne starts properly on 0.0.0.0:8000. Logs show the ASGI server is running without errors. The issue is that Elastic Beanstalk is not routing traffic to the Daphne server inside the Docker container. Here’s what’s happening:

- docker logs shows Daphne listening on 0.0.0.0:8000
- The container starts cleanly (no errors)
- curl <container-ip>:8000/ works
- curl http://localhost/ on the host does not reach Daphne
- /health/ returns nothing because Django had no route (fixed now)
- The Elastic Beanstalk environment loads, but the site doesn’t respond externally

It seems like NGINX inside EB is not proxying requests to the container. I think I need a correct NGINX proxy config or a proper EB .config file that routes traffic to the container’s internal IP/port. Can someone provide a working example of:

✅ Dockerfile
✅ entrypoint.sh
✅ EB .ebextensions config for ASGI/Daphne
✅ NGINX proxy config for forwarding WebSocket + HTTP traffic
✅ Any extra EB settings needed for Channels

Basically, I need the correct setup so EB can forward all traffic to Daphne inside a Docker container. Any working …
-
unable to access my EC2 ubuntu server with public ip:8000 [closed]
I associated a public Elastic IP address with my EC2 instance, installed the virtual environment properly, python manage.py runserver 0.0.0.0:8000 executes properly, and PostgreSQL is connected on port 5432 properly. Ports 22, 80, and 443 are allowed in the firewall. Here is the security group screenshot attached, and here are the outbound rules. When I run sudo ufw status verbose I get the following; it indicates all my needed ports are properly attached. My routing tables and Network ACL are also set properly, but when I try to access my server I get the following errors.
-
Django Not Saving Form Data
I fill in the Django form in the contact.html file, but the form data is not saved to the database or anywhere else. There is no error or warning while saving the form data. Form screenshot attached.

views.py:

    from .forms import CnForm

    def contact(request):
        template = loader.get_template('contact.html')
        form = CnForm(request.POST or None)
        if form.is_valid():
            form.save()
        context = {'form': form}
        return HttpResponse(template.render(context, request))

models.py:

    from django.db import models

    class FModel(models.Model):
        first_name = models.CharField(max_length=100)
        last_name = models.CharField(max_length=100)

        def __str__(self):
            return self.first_name

forms.py:

    from django import forms
    from .models import FModel

    class CnForm(forms.ModelForm):
        class Meta:
            model = FModel
            fields = "__all__"

contact.html:

    <div class="contact-container">
        <form action="" method="post">
            {% csrf_token %}
            {{ form }}
            <input type="submit" value="Submit">
        </form>
    </div>
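Not from the original post, but for comparison, the conventional ModelForm handling pattern looks roughly like this; the 'contact' URL name is a placeholder. Saving only on a valid POST and redirecting afterwards also makes validation failures easier to spot, because the bound form with its errors is re-rendered:

    from django.shortcuts import redirect, render
    from .forms import CnForm

    def contact(request):
        form = CnForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            form.save()                 # persists a new FModel row
            return redirect("contact")  # post/redirect/get to avoid resubmission
        # On GET, or when validation fails, re-render the (bound) form so any
        # field errors are visible in the template.
        return render(request, "contact.html", {"form": form})
-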
403 Forbidden: "CSRF Failed: CSRF token missing." on DRF api-token-auth/ after applying csrf_exempt
I'm encountering a persistent 403 Forbidden error with the detail: CSRF Failed: CSRF token missing. This happens when trying to obtain an authentication token using Django REST Framework's built-in api-token-auth/ endpoint.

Context: I am sending a POST request from Postman (using raw and application/json for the body). The CSRF protection is interfering because Postman, as an external client, doesn't handle session cookies or CSRF tokens. I attempted to fix this by explicitly applying the @csrf_exempt decorator to the view in my urls.py, but the error remains.

Configuration and code. Here are the relevant snippets from my project setup:

1. settings.py (middleware and DRF authentication). My middleware includes CSRF protection, and I have SessionAuthentication enabled, which seems to be causing the conflict.

    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "django.contrib.sessions.middleware.SessionMiddleware",
        "django.middleware.common.CommonMiddleware",
        "django.middleware.csrf.CsrfViewMiddleware",
        "django.contrib.auth.middleware.AuthenticationMiddleware",
        "django.contrib.messages.middleware.MessageMiddleware",
        "django.middleware.clickjacking.XFrameOptionsMiddleware",
    ]

    REST_FRAMEWORK = {
        'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
        'PAGE_SIZE': 10,
        'DEFAULT_AUTHENTICATION_CLASSES': (
            'rest_framework.authentication.TokenAuthentication',
            'rest_framework.authentication.SessionAuthentication',
        ),
        'DEFAULT_PERMISSION_CLASSES': (
            'rest_framework.permissions.IsAuthenticated',
        ),
    }

2. urls.py (where csrf_exempt is applied). This is how I'm currently trying to exempt the view; I have imported csrf_exempt from django.views.decorators.csrf.

    from django.contrib import admin
    from django.urls import path, include
    from rest_framework.authtoken import views
    from django.views.decorators.csrf import csrf_exempt
    from api import views as api_views

    urlpatterns = [
        path("home/", include("expenses.urls")),
        path("admin/", admin.site.urls),
        path("api/", include("api.urls")),
        …
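Not part of the question, but for orientation: DRF's SessionAuthentication performs its own CSRF check on unsafe methods, and that check is not affected by @csrf_exempt on the URL. A sketch of the settings change that is usually suggested when the API is driven by token clients such as Postman:

    # settings.py (sketch): keep TokenAuthentication for API clients and drop
    # SessionAuthentication globally (or keep it only where the browsable API
    # is actually needed), so no CSRF token is required for token requests.
    REST_FRAMEWORK = {
        'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
        'PAGE_SIZE': 10,
        'DEFAULT_AUTHENTICATION_CLASSES': (
            'rest_framework.authentication.TokenAuthentication',
        ),
        'DEFAULT_PERMISSION_CLASSES': (
            'rest_framework.permissions.IsAuthenticated',
        ),
    }

Alternatively, clearing the session cookie that Postman stored from a previous admin login avoids the session-based CSRF check without touching settings.
-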
'django.db.utils.ProgrammingError: relation "users_user" does not exist' error while running ' python manage.py migrate_schemas --shared'
AccrediDoc - Multi-tenant Accreditation Management System: a comprehensive Django-based multi-tenant accreditation management system designed for healthcare organizations in India. Manage NABL, NABH, ISO 15189 and other healthcare accreditations with ease. Features:

- Multi-tenant Architecture: secure, isolated environments for multiple organizations
- Document Management: upload, version, and track accreditation documents with expiry alerts
- Compliance Tracking: monitor compliance status with interactive checklists and evidence tracking
- User Management: role-based access control with different user roles
- Accreditation Types: support for multiple accreditation standards and clauses
- Reporting: generate comprehensive compliance and performance reports
- Audit Logging: complete audit trail for all system activities

While running the migration for the shared schema I got the following error. It is a multi-tenant Django app with five different apps.

    System check identified some issues:
    WARNINGS:
    ?: (staticfiles.W004) The directory 'D:\workik_projects\AccrediDoc_V3\static' in the STATICFILES_DIRS setting does not exist.
    [standard:public] === Starting migration
    [standard:public] System check identified some issues:
    WARNINGS:
    ?: (staticfiles.W004) The directory 'D:\workik_projects\AccrediDoc_V3\static' in the STATICFILES_DIRS setting does not exist.
    [standard:public] Operations to perform:
    [standard:public]   Apply all migrations: admin, auth, contenttypes, django_celery_beat, sessions
    [standard:public] Running migrations:
    [standard:public]   Applying admin.0001_initial...
    Traceback (most recent call last):
      File "D:\workik_projects\AccrediDoc_V3\acc_venv\lib\site-packages\django\db\backends\utils.py", line 103, in _execute
        return self.cursor.execute(sql)
    psycopg2.errors.UndefinedTable: relation "users_user" does not exist

    The above exception was the direct cause of …
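Not from the original post, but a sketch of the settings area worth checking, assuming django-tenants (the app labels below are guesses based on the error message): the app that provides AUTH_USER_MODEL (apparently a users app backing the users_user table) has to be listed in SHARED_APPS, otherwise migrate_schemas --shared tries to apply admin.0001_initial before the user table it references exists in the public schema.

    # settings.py (sketch, django-tenants style; names are assumptions)
    SHARED_APPS = [
        "django_tenants",
        "users",                       # app defining AUTH_USER_MODEL -> users_user
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "django.contrib.admin",        # admin.0001_initial needs users_user to exist
        "django.contrib.sessions",
        "django_celery_beat",
        # ...
    ]

    TENANT_APPS = [
        "django.contrib.contenttypes",
        # tenant-specific apps ...
    ]

    INSTALLED_APPS = list(SHARED_APPS) + [
        app for app in TENANT_APPS if app not in SHARED_APPS
    ]

    AUTH_USER_MODEL = "users.User"
-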
Django: Annotate queryset based on the existence of many-to-many relationships
I am using Django 5.2 with Postgres 17 and am trying to modify the queryset of a ListView for row-based access control. Items are to be included in the queryset either if the user is part of a project that is directly related to the item, or if the project related to the item is set to be "visible". The user is presented with different information and interaction possibilities in the front end depending on which of the two options ("accessible" vs "visible") was true. The following solution in theory yields the right result:

    def get_queryset(self):
        queryset = self.model.objects.all()
        if self.request.user.is_superuser:
            return queryset.annotate(accessible=Value(True))

        # Cache all projects that the user is part of. A project is a Django group
        # (one-to-one relationship) with some additional attributes.
        user_projects = Project.objects.filter(group__in=self.request.user.groups.all())

        # Get all items that the user can access and mark them accordingly.
        accessible_items = (
            queryset
            .filter(groups__project__in=user_projects)
            .annotate(accessible=Value(True))
        )

        # Get all items that the user can see (but not access), and mark them accordingly.
        visible_items = (
            queryset
            .filter(groups__project__in=Project.objects.filter(visible=True))
            .exclude(groups__project__in=user_projects)
            .annotate(accessible=Value(False))
        )

        return accessible_items.union(visible_items)

The approach is simple enough and I'm not too concerned about efficiency, but there is a significant drawback: I'm using a union of two querysets, and …
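Not part of the original post, but a sketch of how the same marking is often done without union(), so the result stays an ordinary queryset that can still be filtered, ordered and paginated. It reuses the model and field names from the snippet above and assumes a Django version that supports filtering on boolean annotations (3.0+):

    from django.db.models import Exists, OuterRef, Q, Value

    def get_queryset(self):
        queryset = self.model.objects.all()
        if self.request.user.is_superuser:
            return queryset.annotate(accessible=Value(True))

        user_projects = Project.objects.filter(group__in=self.request.user.groups.all())

        # "accessible" becomes a correlated EXISTS instead of a literal, so no
        # union is needed and the M2M join cannot duplicate rows.
        accessible = Exists(
            self.model.objects.filter(pk=OuterRef("pk"), groups__project__in=user_projects)
        )
        visible = Exists(
            self.model.objects.filter(pk=OuterRef("pk"), groups__project__visible=True)
        )
        return (
            queryset
            .annotate(accessible=accessible, visible=visible)
            .filter(Q(accessible=True) | Q(visible=True))
        )
-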
Django views spawn the error "cleansed = [self.cleanse_setting("", v) for v in value]" and go into infinite loops
I have two views that spawn these messages in sudo systemctl status gunicorn:

    Nov 07 10:46:36 mysite gunicorn[2107]: cleansed = [self.cleanse_setting("", v) for v in value]
    Nov 07 10:46:36 mysite gunicorn[2107]:   File "/home/mysite/anaconda3/envs/mysite/lib/python3.10/site-packages/django/views/debug.py", line 135, in cleanse_setting

The whole site works, but any view that accesses a certain class spawns these errors. The views were previously working with no problem. I added 4 fields to the class in its models.py, then removed them and migrated the database with:

    python manage.py makemigrations
    python manage.py migrate

After that, the error started to show. Any help would be appreciated.
-
appAccountToken not sent to backend during Apple coupon (reward) redemption using StoreKit 2
I'm integrating Apple In-App Purchases with StoreKit 2 in an iOS app. The backend (Django) handles subscription verification and links each transaction to a user using appAccountToken. Everything works fine for normal subscription purchases — the app sets the appAccountToken correctly, and it reaches the backend through the transaction data. However, during coupon / reward redemption (using Apple’s Reward Redemption Sheet), the appAccountToken is not included in the transaction payload that Apple sends to the backend. As a result, my backend can’t associate the redeemed subscription with the correct user account. How can we ensure that the appAccountToken is included (or reattached) during reward / coupon redemption using StoreKit 2? Is there any recommended way to set or restore the appAccountToken during the reward redemption flow? -
Upgrading Django to 5.2.7 causing error with rest_framework_simplejwt as django.utils.timezone is deprecated
I am upgrading my Django project to v5.2.7. After installing requirements.txt with the upgraded versions of all libraries, I ran the command to validate the code:

    python manage.py check

But it is throwing these errors:

    ImportError: Could not import 'rest_framework_simplejwt.authentication.JWTAuthentication' for API setting 'DEFAULT_AUTHENTICATION_CLASSES'.
    ImportError: cannot import name 'utc' from 'django.utils.timezone' (...\envs\localenv\Lib\site-packages\django\utils\timezone.py).

requirements.txt:

    asgiref==3.8.1
    certifi==2023.11.17
    Django==5.2.7
    django-cors-headers==4.3.1
    djangorestframework==3.14.0
    mysqlclient==2.2.0
    PyJWT==2.8.0
    pytz==2023.3
    newrelic==9.0.0
    djangorestframework_simplejwt==5.2.0
    sqlparse==0.4.4
-
Error running development server in Django project, some issue with migration
I have developed a SaaS project with django-tenants. While running migrations I got the following error; it seems to be related to a migration file.

    (acc_venv) D:\workik_projects\AccrediDoc_v2>py manage.py makemigrations reports
    Traceback (most recent call last):
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 22, in <module>
        main()
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 19, in main
        execute_from_command_line(sys.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
        utility.execute()
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 436, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 416, in run_from_argv
        self.execute(*args, **cmd_options)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 457, in execute
        self.check(**check_kwargs)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 492, in check
        all_issues = checks.run_checks(
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\checks\registry.py", line 89, in run_checks
        new_errors = check(app_configs=app_configs, databases=databases)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\contrib\auth\checks.py", line 101, in check_user_model
        if isinstance(cls().is_anonymous, MethodType):
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\base.py", line 537, in __init__
        val = field.get_default()
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\fields\related.py", line 1176, in get_default
        if isinstance(field_default, self.remote_field.model):
    TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union

    (acc_venv) D:\workik_projects\AccrediDoc_v2>py manage.py makemigrations report
    Traceback (most recent call last):
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 22, in <module>
        main()
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 19, in main
        execute_from_command_line(sys.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
        utility.execute()
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 436, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 416, in run_from_argv
        self.execute(*args, **cmd_options)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 457, in execute
        self.check(**check_kwargs)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 492, in check
        all_issues = checks.run_checks( …
-
Django static images not showing on Vercel
I'm deploying my Django project to Vercel. Everything works fine locally, but after deployment, images from the static folder are not showing.

Project structure:

    datafam/
    ├── settings.py
    ├── wsgi.py
    static/
    └── teams/
        └── image/
            ├── Abu Sofian.webp
            ├── Crystal Andrea Dsouza.webp
    templates/
    └── teams/
        └── index.html
    staticfiles/
    vercel.json
    requirements.txt

vercel.json:

    {
        "builds": [
            {
                "src": "datafam/wsgi.py",
                "use": "@vercel/python",
                "config": { "maxLambdaSize": "100mb", "runtime": "python3.12" }
            }
        ],
        "routes": [
            { "src": "/(.*)", "dest": "datafam/wsgi.py" }
        ]
    }

What I'm trying to achieve: I just want my static images (under /static/teams/image/) to be correctly served after deploying to Vercel, exactly the same way Django serves them locally using {% static %} in templates.

index.html:

    {% extends "base.html" %}
    {% load static %}
    {% block head_title %} {{title}} {% endblock head_title %}
    {% block content %}
    <section class="dark:bg-neutral-900 bg-white py-20">
      <div class="container mx-auto px-4 text-center">
        <p class="text-4xl md:text-5xl font-extrabold dark:text-gray-100 text-gray-800">Team Us</p>
        <p class="mt-16 text-lg text-gray-600 dark:text-gray-400 max-w-4xl mx-auto">
          Meet the passionate and dedicated individuals who form the core of our community. Our team is committed to fostering a collaborative and supportive environment for all data enthusiasts.
        </p>
      </div>
      {# Change the container to use flex-wrap and gap …
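Not from the original question, but for context: Vercel's Python runtime only invokes the WSGI application, so nothing serves /static/ unless the Django app itself does. A commonly used sketch is WhiteNoise plus a collectstatic step at build time; the middleware path and storage backend below are the standard WhiteNoise names, everything else follows the structure above:

    # datafam/settings.py (sketch)
    from pathlib import Path

    BASE_DIR = Path(__file__).resolve().parent.parent

    STATIC_URL = "/static/"
    STATICFILES_DIRS = [BASE_DIR / "static"]
    STATIC_ROOT = BASE_DIR / "staticfiles"

    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "whitenoise.middleware.WhiteNoiseMiddleware",   # pip install whitenoise
        # ... remaining middleware unchanged ...
    ]

    # Django 4.2+ form; older projects set STATICFILES_STORAGE instead.
    STORAGES = {
        "default": {"BACKEND": "django.core.files.storage.FileSystemStorage"},
        "staticfiles": {"BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage"},
    }

python manage.py collectstatic --noinput then has to run during the Vercel build (for example via a build command or build script referenced from vercel.json) so the collected files exist in staticfiles/ before deployment.
-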
"SMTPAuthenticationError: Authentication disabled due to threshold limitation" on production server on AWS
I've set up email sending in my Django project that is deployed on AWS. When I run it locally the emails go out without a problem, but when I try it on the production server on an EC2 Ubuntu VM, I get the error smtplib.SMTPAuthenticationError: (535, b'5.7.0 Authentication disabled due to threshold limitation'). My settings are the same on both machines:

    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'mail.my-provider.com'
    EMAIL_PORT = 1025
    EMAIL_HOST_USER = 'me@my-provider.com'
    EMAIL_HOST_PASSWORD = 'mypassword'

Is there anything specific I need to do to be able to send emails from AWS? My outbound rules are set wide open.
-
Cloud Storage + Cloud Tasks for async webhook processing on Cloud Run - best practice
I've been looking around for an answer to this, but I'm struggling to find something definitive; my apologies if I've overlooked something obvious. I'm processing webhooks on Cloud Run (Django) that need async handling because processing takes 30+ seconds, but the webhook provider times out at 30s. Since Cloud Run is stateless and spins up per request (no persistent background workers like Celery), I'm using this pattern:

    # 1. Webhook endpoint
    def receive_webhook(request):
        blob_name = f"webhooks/{uuid.uuid4()}.json"
        bucket.blob(blob_name).upload_from_string(json.dumps(request.data))
        webhook = WebhookPayload.objects.create(gcs_path=blob_name)
        create_cloud_task(payload_id=webhook.id)
        return Response(status=200)  # Fast response

The Cloud Task then calls the following endpoint, passing along the ID that points at the Cloud Storage path recorded by the original webhook endpoint:

    def process_webhook(request):
        webhook = WebhookPayload.objects.get(id=request.data['payload_id'])
        payload = json.loads(bucket.blob(webhook.gcs_path).download_as_text())
        process_data(payload)  # 30+ seconds
        bucket.blob(webhook.gcs_path).delete()

Is GCS + Cloud Tasks the right pattern for Cloud Run's stateless model, or is temporarily storing the JSON directly in a Django model fine, since Cloud Tasks handles the queueing? Does temporary storage in GCS rather than in Postgres provide meaningful benefits? Should I be using Pub/Sub instead? It seems more suited to event broadcasting; I just need to invoke one endpoint. Thanks for any advice that comes my way.
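Not part of the question, but a sketch of what the create_cloud_task helper referenced above typically looks like with the google-cloud-tasks client; the project, region, queue name and target URL are placeholders:

    import json
    from google.cloud import tasks_v2

    def create_cloud_task(payload_id: int) -> None:
        client = tasks_v2.CloudTasksClient()
        parent = client.queue_path("my-project", "europe-west1", "webhooks")  # placeholders
        task = {
            "http_request": {
                "http_method": tasks_v2.HttpMethod.POST,
                "url": "https://my-service-xyz.a.run.app/webhooks/process/",  # placeholder
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"payload_id": payload_id}).encode(),
                # In production an OIDC token is normally attached here so the
                # processing endpoint can stay non-public.
            }
        }
        client.create_task(request={"parent": parent, "task": task})
-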
How do you customise these 3 dots in wagtail?
I want to add another option to it called "Send Email", which will send an email to all the subscribers.

    class FeaturedPageViewSet(SnippetViewSet):
        model = FeaturedPages
        menu_label = "Featured Pages"
        menu_icon = "grip"
        menu_order = 290
        add_to_settings_menu = False
        exclude_from_explorer = False
        list_display = ("blog", "workshop", "ignore")
        search_fields = ("blog", "workshop", "ignore")
        list_filter = ("ignore",)

(https://i.sstatic.net/fzKv5gM6.png)
-
Django app static files recently started returning 404s, deployed by Heroku
The static files in my Django production app recently started returning 404s. Screenshot of production site with dev tools open.

Context: this project has been deployed without issue for several years. I have not pushed changes since September, and I am unsure when the 404s began. The staging version of my Heroku app loads the static assets (screenshot of staging site with dev tools open).

Investigation: I read the most recent WhiteNoise documentation; my app still follows their setup guidance. You can see my settings here (n.b., the project is open source). I also ran heroku run python manage.py collectstatic --app APP_NAME directly. I am aware of this related post, too: Heroku static files not loading, Django
-
Django Rest Framework ListAPIView user permissions - Can't seem to get them working
I have a Django project with Django REST Framework. I have a simple view, Facility, which is a ListAPIView. Permissions were generated for add, change, delete and view. I have created a new user and assigned him no permissions, yet he is able to call GET on the facility endpoint.

    class FacilityListView(ListAPIView):
        queryset = Facility.objects.all()
        serializer_class = FacilitySerializer
        permission_classes = [IsAuthenticated, DjangoModelPermissions]

        def get(self, request):
            self.check_permissions(request)
            facilities = Facility.objects.all()
            serializer = FacilitySerializer(facilities, many=True)
            return Response(serializer.data)

If I test the user's permissions, I get an empty list:

    perms = list(user.get_all_permissions())

If I check whether the permission exists, I get the Facility model as the result:

    a = Permission.objects.get(codename='view_facility')

However, if I check which permissions are required for Facility, I also get an empty list:

    p = perm.get_required_permissions('GET', Facility)

The model is as basic as it can be:

    from django.db import models

    class Facility(models.Model):
        name = models.CharField(max_length=200)
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.name

This is what it says in my settings, and I have no custom permission classes or anything:

    REST_FRAMEWORK = {
        'DEFAULT_AUTHENTICATION_CLASSES': (
            'API.authentication.JWTAuthenticationFromCookie',
        ),
        'DEFAULT_PERMISSION_CLASSES': [
            'rest_framework.permissions.IsAuthenticated',
            'rest_framework.permissions.DjangoModelPermissions',
        ],
    }

Unfortunately, I have not been able to find an answer to my problem. If anyone has any idea, that would be greatly appreciated!
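Not from the original post, but for context: DRF's stock DjangoModelPermissions maps GET/HEAD/OPTIONS to an empty permission list, which matches the empty get_required_permissions('GET', Facility) result above. A sketch of the usual adjustment when read access should require the view permission:

    from rest_framework.permissions import DjangoModelPermissions

    class DjangoModelPermissionsWithView(DjangoModelPermissions):
        # Extend the default map so safe methods also require view_<model>.
        perms_map = {
            **DjangoModelPermissions.perms_map,
            "GET": ["%(app_label)s.view_%(model_name)s"],
            "HEAD": ["%(app_label)s.view_%(model_name)s"],
            "OPTIONS": ["%(app_label)s.view_%(model_name)s"],
        }

    # Used in place of DjangoModelPermissions on the view:
    # permission_classes = [IsAuthenticated, DjangoModelPermissionsWithView]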