Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Error running development server in Django project: some issue with a migration
I have developed a SaaS project with django-tenants. While running migrations I got the following error; it seems to be related to a migration file:

```
(acc_venv) D:\workik_projects\AccrediDoc_v2>py manage.py makemigrations reports
Traceback (most recent call last):
  File "D:\workik_projects\AccrediDoc_v2\manage.py", line 22, in <module>
    main()
  File "D:\workik_projects\AccrediDoc_v2\manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 416, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 457, in execute
    self.check(**check_kwargs)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 492, in check
    all_issues = checks.run_checks(
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\checks\registry.py", line 89, in run_checks
    new_errors = check(app_configs=app_configs, databases=databases)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\contrib\auth\checks.py", line 101, in check_user_model
    if isinstance(cls().is_anonymous, MethodType):
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\base.py", line 537, in __init__
    val = field.get_default()
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\fields\related.py", line 1176, in get_default
    if isinstance(field_default, self.remote_field.model):
TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union

(acc_venv) D:\workik_projects\AccrediDoc_v2>py manage.py makemigrations report
Traceback (most recent call last):
  File "D:\workik_projects\AccrediDoc_v2\manage.py", line 22, in <module>
    main()
  File "D:\workik_projects\AccrediDoc_v2\manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 416, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 457, in execute
    self.check(**check_kwargs)
  File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 492, in check
    all_issues = checks.run_checks(
…
```
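The final frame hints at the likely mechanism: ForeignKey.get_default() runs `isinstance(field_default, self.remote_field.model)`, and `remote_field.model` is still a plain string whenever a lazily referenced model was never resolved, typically because the app defining it is missing from INSTALLED_APPS (with django-tenants: from SHARED_APPS/TENANT_APPS). A hypothetical field on the custom user model that would reproduce this (names are illustrative, not taken from the post):

```
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    # System checks instantiate the user model (cls() in check_user_model),
    # which calls field.get_default() on every field. get_default() then does
    #     isinstance(default, self.remote_field.model)
    # If "tenants.Client" never resolved to a real model class (app not
    # installed / not listed in SHARED_APPS), remote_field.model is still the
    # string "tenants.Client" and isinstance() raises exactly this TypeError.
    client = models.ForeignKey(
        "tenants.Client",          # hypothetical lazy reference
        on_delete=models.CASCADE,
        default=1,                 # any non-None default triggers the check
    )
```
-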
Django static images not showing on Vercel
I'm deploying my Django project to Vercel, and everything works fine locally, but after deployment the images from the static folder are not showing.

Project structure:

```
datafam/
├── settings.py
├── wsgi.py
static/
└── teams/
    └── image/
        ├── Abu Sofian.webp
        ├── Crystal Andrea Dsouza.webp
templates/
└── teams/
    └── index.html
staticfiles/
vercel.json
requirements.txt
```

File vercel.json:

```
{
  "builds": [
    {
      "src": "datafam/wsgi.py",
      "use": "@vercel/python",
      "config": { "maxLambdaSize": "100mb", "runtime": "python3.12" }
    }
  ],
  "routes": [
    { "src": "/(.*)", "dest": "datafam/wsgi.py" }
  ]
}
```

What I'm trying to achieve: I just want my static images (under /static/teams/image/) to be correctly served after deploying to Vercel — exactly the same way Django serves them locally using {% static %} in templates.

File index.html:

```
{% extends "base.html" %}
{% load static %}
{% block head_title %} {{title}} {% endblock head_title %}
{% block content %}
<section class="dark:bg-neutral-900 bg-white py-20">
  <div class="container mx-auto px-4 text-center">
    <p class="text-4xl md:text-5xl font-extrabold dark:text-gray-100 text-gray-800">Team Us</p>
    <p class="mt-16 text-lg text-gray-600 dark:text-gray-400 max-w-4xl mx-auto">
      Meet the passionate and dedicated individuals who form the core of our community. Our team is
      committed to fostering a collaborative and supportive environment for all data enthusiasts.
    </p>
  </div>
  {# Changing the container to use flex-wrap and gap
…
```
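For context, Vercel's Python runtime doesn't serve a static/ directory by itself the way runserver does, so the app generally has to collect and serve static files itself. A minimal sketch of the common WhiteNoise setup (assuming the layout above; not necessarily the poster's intended fix):

```
# settings.py
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "/static/"
STATICFILES_DIRS = [BASE_DIR / "static"]   # source images live here
STATIC_ROOT = BASE_DIR / "staticfiles"     # collectstatic writes here

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # immediately after SecurityMiddleware
    # ... remaining middleware unchanged ...
]
```

python manage.py collectstatic then has to run as part of the Vercel build step so staticfiles/ actually exists in the deployed bundle.
-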
"SMTPAuthenticationError: Authentication disabled due to threshold limitation" on production server on AWS
I've set up email sending in my Django project, which is deployed on AWS. When I run it locally the emails go out without a problem, but when I try it on the production server on an EC2 Ubuntu VM, I get:

```
smtplib.SMTPAuthenticationError: (535, b'5.7.0 Authentication disabled due to threshold limitation')
```

My settings are the same on both machines:

```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'mail.my-provider.com'
EMAIL_PORT = 1025
EMAIL_HOST_USER = 'me@my-provider.com'
EMAIL_HOST_PASSWORD = 'mypassword'
```

Is there anything specific I need to do to be able to send emails from AWS? My outbound rules are set wide open.
-
Cloud Storage + Cloud Tasks for async webhook processing on Cloud Run - best practice
I've been looking around for an answer to this, but struggling to find something definitive. My apologies if I've overlooked something obvious. I'm processing webhooks on Cloud Run (Django) that need async handling, because processing takes 30+ seconds but the webhook provider times out at 30s. Since Cloud Run is stateless and spins up per-request (no persistent background workers like Celery), I'm using this pattern:

```
# 1. Webhook endpoint
def receive_webhook(request):
    blob_name = f"webhooks/{uuid.uuid4()}.json"
    bucket.blob(blob_name).upload_from_string(json.dumps(request.data))
    webhook = WebhookPayload.objects.create(gcs_path=blob_name)
    create_cloud_task(payload_id=webhook.id)
    return Response(status=200)  # Fast response
```

And then our Cloud Task calls the following endpoint with the unique Cloud Storage path passed from the original webhook endpoint:

```
def process_webhook(request):
    webhook = WebhookPayload.objects.get(id=request.data['payload_id'])
    payload = json.loads(bucket.blob(webhook.gcs_path).download_as_text())
    process_data(payload)  # 30+ seconds
    bucket.blob(webhook.gcs_path).delete()
```

Is GCS + Cloud Tasks the right pattern for Cloud Run's stateless model, or is storing the JSON temporarily in a Django model fine, since Cloud Tasks handles the queueing? Does temporary storage in GCS rather than in Postgres provide meaningful benefits? Should I be using Pub/Sub instead? It seems more for event broadcasting; I just need to invoke one endpoint. Thanks for any advice that comes my way.
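The post doesn't show create_cloud_task, so for reference, a minimal sketch of what it might look like with the google-cloud-tasks client (the project, location, queue, and URL names are placeholder assumptions):

```
import json
from google.cloud import tasks_v2

def create_cloud_task(payload_id):
    client = tasks_v2.CloudTasksClient()
    # Placeholder project/location/queue names.
    parent = client.queue_path("my-project", "us-central1", "webhook-queue")
    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            "url": "https://my-service.a.run.app/process-webhook/",  # assumed endpoint
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"payload_id": payload_id}).encode(),
        }
    }
    client.create_task(request={"parent": parent, "task": task})
```

One point in GCS's favour worth weighing: Cloud Tasks caps task payload size, and large webhook bodies also bloat Postgres rows, so handing the body to GCS and passing only an ID keeps both the queue message and the database small.
-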
How do you customise these 3 dots in Wagtail?
I want to add another option to it, "Send email", which will send an email to all the subscribers.

```
class FeaturedPageViewSet(SnippetViewSet):
    model = FeaturedPages
    menu_label = "Featured Pages"
    menu_icon = "grip"
    menu_order = 290
    add_to_settings_menu = False
    exclude_from_explorer = False
    list_display = ("blog", "workshop", "ignore")
    search_fields = ("blog", "workshop", "ignore")
    list_filter = ("ignore",)
```

(https://i.sstatic.net/fzKv5gM6.png)
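One way to add an entry to a snippet row's "three dots" menu is the register_snippet_listing_buttons hook. A sketch, with the caveat that the hook signature and button class have shifted between Wagtail versions, and the URL name here is hypothetical:

```
# wagtail_hooks.py
from django.urls import reverse
from wagtail import hooks
from wagtail.snippets.widgets import SnippetListingButton

from .models import FeaturedPages  # assumed import path

@hooks.register("register_snippet_listing_buttons")
def featured_pages_buttons(snippet, user, next_url=None):
    if isinstance(snippet, FeaturedPages):
        yield SnippetListingButton(
            "Send email",
            reverse("send_subscriber_email", args=[snippet.pk]),  # hypothetical view
            priority=100,
        )
```
-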
Django app static files recently started returning 404s, deployed by Heroku
The static files in my Django production app recently started returning 404s (screenshot of production site with dev tools open). Context: this project has been deployed without issue for several years; I have not pushed changes since September, and I am unsure when the 404s began. The staging version of my Heroku app loads the static assets (screenshot of staging site with dev tools open). Investigation: I read the most recent WhiteNoise documentation, and my app still follows their setup guidance. You can see my settings here (n.b., the project is open source). I also ran `heroku run python manage.py collectstatic --app APP_NAME` directly. I am aware of this related post, too: Heroku static files not loading, Django.
-
Django Rest Framework ListAPIView user permissions - can't seem to get them working
I have a Django project with Django REST Framework. I have a simple view for Facility, which is a ListAPIView. Permissions were generated for add, change, delete, and view. I have created a new user and assigned him no permissions, yet he is able to call GET on facility.

```
class FacilityListView(ListAPIView):
    queryset = Facility.objects.all()
    serializer_class = FacilitySerializer
    permission_classes = [IsAuthenticated, DjangoModelPermissions]

    def get(self, request):
        self.check_permissions(request)
        facilities = Facility.objects.all()
        serializer = FacilitySerializer(facilities, many=True)
        return Response(serializer.data)
```

If I list the user's permissions, I get an empty list:

```
perms = list(user.get_all_permissions())
```

If I check whether the permission exists, I get the Facility model as the result:

```
a = Permission.objects.get(codename='view_facility')
```

However, if I check which permissions are required for GET on Facility, I also get an empty list:

```
p = perm.get_required_permissions('GET', Facility)
```

The model is as basic as it can be:

```
from django.db import models

class Facility(models.Model):
    name = models.CharField(max_length=200)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name
```

This is what's in my settings, and I have no custom permission classes or anything:

```
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'API.authentication.JWTAuthenticationFromCookie',
    ),
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
        'rest_framework.permissions.DjangoModelPermissions',
    ],
}
```

Unfortunately, I have not been able to find an answer to my problem. If anyone has any idea, that would be greatly appreciated!
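That empty list from `get_required_permissions('GET', ...)` is actually DRF's documented default: DjangoModelPermissions.perms_map maps GET, OPTIONS, and HEAD to no required permissions, so read access only needs authentication. A subclass that also enforces the view permission for reads would look something like this sketch:

```
from rest_framework.permissions import DjangoModelPermissions

class DjangoModelPermissionsWithView(DjangoModelPermissions):
    # Same map as the parent class, but reads also require the
    # "view" permission instead of the default empty list.
    perms_map = {
        **DjangoModelPermissions.perms_map,
        'GET': ['%(app_label)s.view_%(model_name)s'],
        'HEAD': ['%(app_label)s.view_%(model_name)s'],
    }
```

Swapping this subclass into permission_classes (or DEFAULT_PERMISSION_CLASSES) would make the no-permission user's GET return 403.
-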
Django gunicorn gevent with Statsig - run code in forked process
I am running a Django app with gunicorn gevent workers, and I'm using Statsig for feature flagging. It appears to be struggling, I assume due to gevent's monkey patching. I was hoping I could get around this by initializing Statsig after app start-up, specifically only in the forked worker processes, not the main process. It runs init() but then never updates its internal cache, so my feature gates always return the first value they got and never fetch new ones. Has anyone had any similar issues at all?
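For the "only in the forked processes" part: gunicorn supports a post_fork server hook in its config file, which runs in each worker right after it is forked. A sketch, with the Statsig calls marked as assumptions since SDK versions differ:

```
# gunicorn.conf.py
def post_fork(server, worker):
    # Runs once per worker process, so the SDK's background polling
    # lives in the worker rather than the never-patched master process.
    from statsig import statsig  # assumed import path for the Statsig SDK
    statsig.initialize("server-secret-key")  # assumed call; check your SDK version
```
-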
Django Materialized View Refreshing Celery Task is freezing in DB layer for larger dataset
In my application, written in Django 5.2, I want to aggregate static data and store it as a materialized view in PostgreSQL 16.10. I can create it with my own migration or with the django-materialized-view library; I have no problems creating it. However, when I call the Celery task that should refresh the view after updating the data, it "freezes" when I enable updates for all three carriers. On the other hand, if I remove the third carrier (whose data accounts for approximately 95% of the total of the three), the refresh task runs without any problems. I could blame this on the giant size of the data, but if I run the update only for this giant carrier, or write the refresh command myself in a DBMS console, it executes successfully in 20-30 seconds. The Celery worker that performs update and refresh tasks has concurrency=1 (the refresh task is, of course, the last in the queue), and the configuration of work_mem, maintenance_work_mem, and shared_buffers in the database should definitely be able to handle this task. During the update, no other queries are executed in the database, and the refresh is CONCURRENTLY. You can find my project in the GitHub repository: …
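When a REFRESH MATERIALIZED VIEW CONCURRENTLY seems to hang from a task but works by hand, a useful first check is whether the backend is actually executing or waiting on a lock. A diagnostic sketch using only the plain Django DB API (no assumptions about the project's code):

```
from django.db import connection

def refresh_status():
    with connection.cursor() as cur:
        # Is the refresh executing, idle in transaction, or blocked on a lock?
        cur.execute(
            "SELECT pid, state, wait_event_type, wait_event, query "
            "FROM pg_stat_activity "
            "WHERE query LIKE 'REFRESH MATERIALIZED VIEW%'"
        )
        for row in cur.fetchall():
            print(row)
```

A wait_event_type of Lock here would point at a conflict with a still-open transaction from the preceding update task rather than at data volume.
-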
Django uvicorn ASGI concurrency performance issue
We are running Django with the uvicorn ASGI server in Kubernetes. Following best-practice guides, we are doing this with only 1 worker and allowing the cluster to scale our pods up/down. We chose ASGI because we wanted to be async-ready; however, currently our endpoints are all sync. Internally we use our own auth (a microservice), which is a request to an internal pod using Python's requests library. This works via a JWT being passed up, which we validate against our public keys before fetching user details/permissions. After this, it's all just ORM operations: a couple of .get() and some .create(). When I hit our endpoint with 1 user, this flies through at like 20-50ms. However, as soon as we bump this up to 2-5 users, the whole thing comes to a grinding halt and the requests start taking up to 3-5s. Using profiling tools we can see there are odd gaps of nothing between the internal auth request finishing and going on to do the next function, and similar gaps in other areas. To me this seems to be simply a concurrency issue: our 1 pod has 1 uvicorn worker and can only deal with 1 request. But why would they …
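One Django-specific detail that can produce exactly this pattern: under ASGI, Django runs sync views through sync_to_async with thread_sensitive=True, which funnels all sync work onto a single thread, so one uvicorn worker effectively serializes concurrent sync requests, and a blocking requests call to the auth pod holds everyone up. A hedged sketch of making that call non-blocking in an async view (the URL and endpoint names are made up):

```
import httpx
from django.http import JsonResponse

AUTH_URL = "http://auth-service.internal/validate"  # hypothetical internal URL

async def my_endpoint(request):
    # Non-blocking call to the internal auth service.
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            AUTH_URL,
            headers={"Authorization": request.headers.get("Authorization", "")},
        )
    resp.raise_for_status()
    user_info = resp.json()
    # ORM access from async views needs the async methods (aget/acreate)
    # or sync_to_async wrappers in recent Django versions.
    return JsonResponse({"user": user_info})
```
-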
Create question with options from same endpoint
So I am making a backend system using DRF in Django. This is my first project in Django and DRF, and I am using Django purely as a REST backend for a quiz/MCQ application. This is from my questions app's models.py:

```
from django.db import models
from classifications.models import SubSubCategory

class Question(models.Model):
    ANSWER_TYPES = [
        ('single', 'Single Correct'),
        ('multiple', 'Multiple Correct'),
    ]
    text = models.TextField()
    answer_type = models.CharField(max_length=10, choices=ANSWER_TYPES, default='single')
    difficulty = models.CharField(
        max_length=10,
        choices=[('easy', 'Easy'), ('medium', 'Medium'), ('hard', 'Hard')],
        default='medium'
    )
    explanation = models.TextField(blank=True, null=True)
    subsubcategories = models.ManyToManyField(SubSubCategory, related_name='questions', blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return 'question'

    class Meta:
        ordering = ['-created_at']

    def correct_options(self):
        return self.options.filter(is_correct=True)

    def incorrect_options(self):
        return self.options.filter(is_correct=False)

class Option(models.Model):
    question = models.ForeignKey(Question, related_name='options', on_delete=models.CASCADE)
    label = models.CharField(max_length=5)
    text = models.TextField()
    is_correct = models.BooleanField(default=False)

    def __str__(self):
        return "options"
```

I am using a ModelViewSet with a router, but when I try to create a question I have to make requests to two different endpoints: one for creating the question and another for creating the options for that question. views.py:

```
from rest_framework import viewsets
from .models import Question, Option
from .serializers import QuestionSerializer, OptionSerializer
from core.permissions import IsAdminOrReadOnlyForAuthenticated
from django.db.models import Q

class OptionViewSet(viewsets.ModelViewSet):
    queryset = Option.objects.all()
    serializer_class = …
```
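The usual way to create a question and its options in one POST is a writable nested serializer. A sketch against the models above (not the poster's actual serializers):

```
from rest_framework import serializers
from .models import Question, Option

class OptionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Option
        fields = ['id', 'label', 'text', 'is_correct']

class QuestionSerializer(serializers.ModelSerializer):
    options = OptionSerializer(many=True)

    class Meta:
        model = Question
        fields = ['id', 'text', 'answer_type', 'difficulty', 'explanation', 'options']

    def create(self, validated_data):
        # Pull the nested options out, create the question, then the options.
        options_data = validated_data.pop('options')
        question = Question.objects.create(**validated_data)
        Option.objects.bulk_create(
            Option(question=question, **opt) for opt in options_data
        )
        return question
```

A single POST with a payload like {"text": "...", "answer_type": "single", "options": [{"label": "A", "text": "...", "is_correct": true}]} then creates both in one request.
-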
Django difference between aware datetimes across DST
I'm working on a Django application in which I need to calculate the difference between timestamps stored in the DB. This week I ran into some problems related to DST. In particular, in the following code snippet:

```
tEndUtc = tEnd.astimezone(timezone.utc)
tStartUtc = tStart.astimezone(timezone.utc)
total_timeUTC = tEndUtc - tStartUtc
total_time = tEnd - tStart
```

total_time (which uses the timezone-aware timestamps stored in the DB) is shorter by 1 hour than total_timeUTC. I have USE_TZ = True in the settings file. Here's what I get:

```
tStart = datetime.datetime(2025, 10, 24, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome'))
tEnd = datetime.datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome'))
tStartUtc = datetime.datetime(2025, 10, 23, 22, 0, tzinfo=datetime.timezone.utc)
tEndUtc = datetime.datetime(2025, 10, 31, 22, 59, 59, 999999, tzinfo=datetime.timezone.utc)
total_timeUTC = datetime.timedelta(days=8, seconds=3599, microseconds=999999)
total_time = datetime.timedelta(days=7, seconds=86399, microseconds=999999)
```

What is the correct way to handle DST? And in particular, how does one correctly calculate a time difference across DST? The correct time delta is the one I get when using UTC. Since the whole application is built on timezone-aware datetimes, I would prefer not to change everything and convert to UTC timestamps. Thanks in advance.
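What makes tEnd - tStart come out one hour short: when two aware datetimes carry the same tzinfo object (ZoneInfo instances are cached by key, so both Europe/Rome stamps normally share one), Python's subtraction skips the UTC-offset adjustment and returns the wall-clock difference, which makes the DST transition on 2025-10-26 invisible. Converting through astimezone() first, as the UTC variant does, yields the true elapsed time. A quick check:

```
import datetime
from zoneinfo import ZoneInfo

rome = ZoneInfo("Europe/Rome")
t_start = datetime.datetime(2025, 10, 24, 0, 0, tzinfo=rome)
t_end = datetime.datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=rome)

# Same tzinfo object: naive wall-clock subtraction, DST shift ignored.
print(t_end - t_start)  # 7 days, 23:59:59.999999

# Normalize to UTC first: the real elapsed time, one hour longer.
print(t_end.astimezone(datetime.timezone.utc)
      - t_start.astimezone(datetime.timezone.utc))  # 8 days, 0:59:59.999999
```

Keeping the stored datetimes aware and only converting at the point of subtraction means nothing else in the application has to change.
-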
Can I use the get_or_create() function in Django to assign a global variable?
I am an intern at a company where we use Django as our framework. I was working on a two-part registration system: the admin makes the initial registration, and a link is sent via SMS to the user so the user can complete the registration. I know my code is bad. I have a feeling I should use the get_or_create function to assign a global variable, but I'm afraid of breaking things (I use git, but I'm still scared).

```
class RegisterSerializer(serializers.ModelSerializer):
    """Class for registering users with multiple groups."""

    # is_superuser = serializers.BooleanField(default=False, required=False, write_only=True)

    class Meta:
        fields = [
            "national_code",
            "phone_number",
        ]
        model = User
        extra_kwargs = {
            "national_code": {"write_only": True, "validators": []},
            "phone_number": {"write_only": True, "validators": []},
        }

    def validate(self, attrs):
        if not attrs.get("national_code"):
            raise serializers.ValidationError(_("National code is required."))
        if not attrs.get("phone_number"):
            raise serializers.ValidationError(_("Phone number is required."))
        if User.objects.filter(
            phone_number=attrs.get("phone_number"),
            national_code=attrs.get("national_code"),
            is_complete=True,
        ).exists():
            raise serializers.ValidationError(_("user already exists"))
        # if User.objects.filter(phone_number=attrs.get("phone_number")).exists():
        #     raise serializers.ValidationError(_("Phone number already exist."))
        return attrs

    def create(self, validated_data):
        phone_number = validated_data["phone_number"]
        national_code = validated_data["national_code"]
        user, created = User.objects.get_or_create(
            phone_number=phone_number,
            national_code=national_code,
            defaults={"is_complete": False}
        )
        token = RegisterToken.for_user(user)
        try:
            Sms.sendSMS(
                phone_number,
                f"{str(settings.DOMAIN_NAME)}/api/accounts/complete-register/?token={str(token)}",
            )
            # do not delete this part soon or later we will use this
            # Sms.SendRegisterLink(
            #     phone_number,
            #     [
            #         {
            #
…
```
-
Django Celery Beat SQS slow scheduling
Beat seems to be sending the messages into SQS very slowly, about 100/minute. Every Sunday I have a sendout to about 16k users, all booked for 6:30pm. Beat starts picking it up at the expected time, and I would expect a huge spike in messages coming into SQS at that point, but it takes its time, and I can see in the logs that the "Sending tasks x..." goes on for a few hours. I expect ~16k messages to go out around 6:30pm, and the number of messages processed and deleted to pick up as the autoscaling sets in. I have autoscaling on for my Celery workers, but because the number of messages never really spikes, the workers don't scale until later, when the messages start backing up a bit. I'm really puzzled by this behaviour; does anyone know what I could be missing? I'm running Celery with some crontab tasks, but this one task in particular is a PeriodicTask:

```
celery_beat: celery -A appname beat --loglevel=INFO
```
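Beat itself is single-threaded and publishes scheduled tasks one by one, so one common pattern (a sketch, not a diagnosis of this setup) is to schedule a single fan-out task at 6:30pm and let it enqueue the ~16k per-user tasks in bulk, producing the SQS spike the autoscaler needs:

```
from celery import group, shared_task

@shared_task
def sunday_sendout():
    # One beat-scheduled task that fans out the per-user work quickly,
    # instead of beat publishing 16k messages itself.
    user_ids = list(User.objects.values_list("id", flat=True))  # assumed model
    group(send_newsletter.s(uid) for uid in user_ids).apply_async()

@shared_task
def send_newsletter(user_id):
    ...  # the existing per-user sending logic
```
-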
Django Mongodb Backend not creating collections and indexes
Summary: Running Django migrations against our MongoDB database does not create MongoDB collections or indexes as defined in our app. The command completes without errors, but no collections or indexes are provisioned in MongoDB.

Environment:

- Django: 5.2.5
- django-mongodb-backend: 5.2.2
- Python: 3.11.14
- Database setup: PostgreSQL as default, MongoDB as secondary via django-mongodb-backend

Steps to reproduce:

1. Configure DATABASES with a mongodb alias (see snippet below).
2. Implement models that should live in MongoDB and include indexes/constraints.
3. Implement a database router that routes models with use_db = "mongodb" to the mongodb DB.
4. Run:

```
python manage.py makemigrations mailbot_search_agent
python manage.py migrate mailbot_search_agent --database=mongodb
```

Expected: MongoDB collections are created for the models that declare use_db = "mongodb". Declared indexes and unique constraints are created. If supported by the backend, custom Atlas Search/Vector index definitions are applied.

Actual: migrate --database=mongodb completes, but collections are not created (or get created only after the first write); indexes defined in migrations (0002) and in model Meta/indexes are not present in MongoDB; Atlas Search/Vector indexes (declared via backend-provided Index classes) are not created.

DATABASES configuration (snippets):

```
MONGO_CONNECTION_STRING = os.environ.get("MONGO_CONNECTION_STRING")
MONGO_DB_NAME = os.environ.get("MONGO_DB_NAME", "execfn")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "execfn",
        "USER": "execfn_user",
        "PASSWORD": os.environ.get("DJANGO_DB_PASSWORD"),
        "HOST": "localhost",
        "PORT": "5432",
    },
    "mongodb": {
…
```
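The post's router isn't shown, so one thing worth double-checking: with the standard Django router API, migrate --database=mongodb silently skips every operation when allow_migrate doesn't return True for that alias, which exactly matches "completes without errors but creates nothing". A sketch mirroring the use_db convention from the post:

```
class MongoRouter:
    def db_for_read(self, model, **hints):
        return "mongodb" if getattr(model, "use_db", None) == "mongodb" else None

    db_for_write = db_for_read

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Without an explicit True here for db == "mongodb", the migrate
        # command exits cleanly while creating no collections or indexes.
        if db == "mongodb":
            return app_label == "mailbot_search_agent"
        return None
```
-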
How to aggregate hierarchical data efficiently in Django without causing N+1 queries?
I’m working with a hierarchical model structure in Django, where each level can represent a region, district, or village. The structure looks like this:

```
class Location(models.Model):
    name = models.CharField(max_length=255)
    parent = models.ForeignKey(
        'self', on_delete=models.CASCADE, related_name='children', null=True, blank=True
    )

    def __str__(self):
        return self.name
```

Each Location can have child locations (for example: Region → District → Village). I also have a model that connects each location to a measurement point:

```
class LocationPoint(models.Model):
    location = models.ForeignKey(Location, on_delete=models.CASCADE)
    point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False)
```

And a model that stores daily or hourly measurement values:

```
import uuid

class Value(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False)
    volume = models.FloatField(default=0)
    timestamp = models.DateTimeField()
```

Goal: I want to aggregate values (e.g., total volume) for each top-level region, including all nested child levels (districts, villages, etc.). Example:

Region A → Total Volume: 10,000
Region B → Total Volume: 20,000

Problem: When I try to calculate these sums recursively (looping over children and summing their related Value records), the number of database queries increases dramatically — a classic N+1 query problem.

Question: How can I efficiently compute aggregated values across a hierarchical model in Django — for example, summing all Value.volume fields for every descendant location — …
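One N+1-free approach, as a sketch against the models above (the reverse name "value" assumes Value's ForeignKey to Point has no related_name): compute per-location totals in a single grouped query, then roll them up the parent chain in memory:

```
from collections import defaultdict
from django.db.models import Sum

def totals_by_top_region():
    # Query 1: direct total per location, following
    # LocationPoint -> Point -> Value.
    per_location = dict(
        LocationPoint.objects
        .values_list('location_id')
        .annotate(total=Sum('point__value__volume'))
    )
    # Query 2: the whole tree, so the roll-up happens in Python.
    parents = dict(Location.objects.values_list('id', 'parent_id'))

    totals = defaultdict(float)
    for loc_id, total in per_location.items():
        node = loc_id
        while parents.get(node) is not None:  # walk up to the root
            node = parents[node]
        totals[node] += total or 0
    return totals  # {top_level_location_id: summed volume}
```

Two queries regardless of tree depth; a recursive CTE via raw SQL would be the alternative if the tree is too large to hold in memory.
-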
Deploy Django and Nginx under subpath
I'm trying to deploy a Django app with Gunicorn and Nginx under a subpath. I'm inside a corporate network, and the path www.example.com/myapp points to the IP 192.168.192.77:8080 of my PC on the local network (I have no control over the pathing or the corporate network, just that port exposed to the internet through /myapp). I tried many things, including this: How to host a Django project in a subpath?, but it doesn't show the Django welcome page, just the Nginx welcome page. I also can't access the Django admin page that should be at /myapp/admin, just a 404 page. This is the config of my site in the sites-available folder for Nginx:

```
server {
    listen 8080;
    server_name 192.168.192.77;

    location /myapp/static/ {
        root /home/user/myapp;
    }

    location /myapp/ {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```

I tried `proxy_set_header SCRIPT_NAME /myapp;` but it didn't work. If I don't configure any paths, it shows the Django welcome page at /myapp, but then I can't access /myapp/admin, also a 404. Curiously, if I start the Django development server using python manage.py runserver without Nginx, it works: the Django welcome page shows at /myapp and I can access /myapp/admin with the …
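For reference, subpath deployments have two halves that must agree: URL generation in Django and path handling between Nginx and Gunicorn. The Django half is typically (a sketch assuming the /myapp prefix from the post):

```
# settings.py
FORCE_SCRIPT_NAME = "/myapp"    # prefix for every URL that reverse() builds
STATIC_URL = "/myapp/static/"   # so {% static %} resolves under the prefix
```

The other half is making sure the WSGI layer sees SCRIPT_NAME=/myapp so that PATH_INFO excludes the prefix when routes are matched; if Nginx passes the full /myapp/... path through untouched while the URLconf has no myapp/ prefix, every route 404s, which matches the symptom described.
-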
Django transaction.atomic() on single operation prevents race conditions?
Why do I need to use atomic() when I have only 1 DB operation inside the atomic block? My AI assistant tells me that it prevents race conditions, but I don't use select_for_update() inside. It says the DB looks at unique constraints and sets locks automatically, but only when I use atomic(), and that without atomic() race conditions can happen. Is this true? Can you explain this behaviour? I don't understand how it works if I have only one DB operation inside. Code example:

```
with atomic():
    Model.objects.create(....)
```
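For what it's worth, a single INSERT is already atomic at the database level, and unique constraints are enforced with or without transaction.atomic(). The usual reason to wrap a single create() is different: the savepoint atomic() creates lets you catch the IntegrityError from a constraint violation without leaving an enclosing transaction broken. A sketch of that pattern:

```
from django.db import IntegrityError, transaction

def create_or_get(**kwargs):
    try:
        # The savepoint lets us handle the constraint violation here
        # without poisoning any outer transaction.
        with transaction.atomic():
            return Model.objects.create(**kwargs)
    except IntegrityError:
        # Another process won the race; the unique constraint (not the
        # atomic block) is what prevented the duplicate row.
        return Model.objects.get(**kwargs)
```
-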
Can't find xgettext or msguniq, but gettext-base is installed
As part of a Django project, I need to build translation *.po files, but I get the error

```
CommandError: Can't find xgettext. Make sure you have GNU gettext tools 0.19 or newer installed.
```

when I run `django-admin makemessages -a`, and

```
CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.19 or newer installed.
```

when I run `django-admin makemessages -l en`. I see that what is missing is supposed to come from the OS, and I run Ubuntu 25.04. So I tried to run xgettext and msguniq on their own. Each time I get

```
Command 'xgettext' not found, but can be installed with:
sudo apt install gettext
```

So I tried doing just that, but apt fails with `Error: Unable to locate package gettext`. However, when I run `gettext -V` I do have gettext v0.23.1 installed. It seems to come from the package gettext-base, which is indeed installed but can't seem to be used. I searched over the internet but can't seem to find anything helpful. I don't know if it is necessary, but I do have python-gettext installed in my Python venv as well. Any idea how to make Python find gettext in this situation?
-
Encoding full payload and decoding in server in REST
Issue: The WAF is flagging some errors because my payload responses include HTML tags (mostly field-level messages and user guides). Sometimes I am also sending R programming language code to the server, which is just stored in the database. During the WAF security check, it raises a vulnerability issue saying HTML tags and code were detected.

My current solution: Our team proposed encoding the entire payload and decoding the encoded payload in Django middleware. But I am wondering if this is really the best approach.

Validation and question: Will this approach be efficient in the long run? If you have faced the same issue, can you please suggest the right approach? Thank you.
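For concreteness, the proposed scheme would look roughly like this (a base64-over-JSON sketch purely to illustrate the team's idea; the "data" envelope field is an assumption):

```
import base64
import json

from django.utils.deprecation import MiddlewareMixin

class DecodePayloadMiddleware(MiddlewareMixin):
    def process_request(self, request):
        # Clients send {"data": "<base64 of the real JSON body>"}.
        if request.content_type == "application/json" and request.body:
            envelope = json.loads(request.body)
            if "data" in envelope:
                decoded = base64.b64decode(envelope["data"])
                # Replace the body so downstream views see the real payload
                # (._body is a private attribute; shown only as a sketch).
                request._body = decoded
```

Note that this hides content from the WAF rather than addressing what it flags, which is the usual argument against it; an allow-list rule or a dedicated raw-content field is often preferred.
-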
Deploying Dockerized (React + Django + PostgreSQL ) app with custom license to a client without exposing source code
I am running a test simulation on a virtual server in VirtualBox to see how the procedure of installing a web application using Docker would work on a client server. My stack includes:

- Frontend: React.js, built into a Docker image
- Backend: Django (Python) in Docker
- Database: PostgreSQL 16 in Docker
- Orchestration: Docker Compose managing all services
- Environment variables: managed via .env.docker for the backend (database credentials, email settings, etc.) and for the frontend at build time (API URL)
- License: a custom license mechanism I implemented myself, which must be included and validated on the client server using license.json as the key sold to clients

In my test: I built the backend and frontend Docker images locally on my development machine. For the frontend, I rebuilt the image with REACT_APP_API_URL=http://localhost:8000 so that it points to the local backend. I exported the backend and frontend images as .tar files to simulate distribution to a client server. On the client server (virtual machine), I loaded the images and tried running them using Docker Compose. I observed that if the frontend API URL is not baked in at build time, React requests go to undefined/users/....

Question: For a real client deployment using this stack, …
-
How to Avoid JWT Collision While Receiving Bearer Token
I am doing a Django project where I am using JWT tokens for authentication. The problem is that two slightly different JWT tokens are both considered valid by the backend, even though the signature should be fixed. What is the reason? I also tried the implementation in FastAPI using PyJWT, and the result was much the same: two different tokens were accepted by the backend server.

Valid token from backend: with c at the end.
Other forged tokens that are accepted: with d at the end; with e at the end.
Other forged tokens that are rejected: with b at the end; with g at the end.
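This is almost certainly base64url padding-bit malleability rather than a signature collision: an HS256 signature is 32 bytes, which encodes to 43 base64url characters carrying 258 bits, so only the first 4 bits of the final character are significant and the last 2 are discarded by lenient decoders. Final characters sharing those 4 bits decode to the same signature bytes, which is exactly the observed c/d/e-accepted, b/g-rejected pattern. A quick demonstration:

```
import base64

# 43 chars + "=" padding decode to 32 bytes; the final character's
# last 2 bits are padding the decoder silently ignores.
for ch in "bcdefg":
    sig = base64.urlsafe_b64decode("A" * 42 + ch + "=")
    print(ch, sig[-1])
# b 6 / c 7 / d 7 / e 7 / f 7 / g 8  ->  c, d, e, f yield identical bytes
```

So the "forged" tokens carry byte-for-byte the same signature after decoding; verification is still sound. Libraries that want strictly canonical tokens re-encode the decoded signature and compare it with the presented one.
-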
Can't insert rows into Supabase profile table even after creating the RLS policy to do so for the sign up feature
Again, I am quite new to Supabase, so I apologize in advance if I don't provide clear details in this post or mess up some terms. Basically, I am doing auth using Supabase and have this table called "profiles" with columns: id (UUID), username (text), email (text). Now, when I create a new account using Supabase, it works: the account gets registered and shows up in the auth tab, but the new row doesn't get inserted into profiles?

```
user = response.user
if user:
    resp = supabase.table("profiles").insert({
        "id": user.id,
        "username": username,
        "email": email
    }).execute()
    print(resp)
    request.session["user_id"] = user.id
    request.session["username"] = username
    return redirect("home")
```

My RLS policy for the profiles table is: Enable insert for authenticated users only, INSERT, anon, authenticated — and I am using a service key to create the Supabase client. Even after all that, I keep getting the error:

```
APIError: {'message': 'new row violates row-level security policy for table "profiles"', 'code': '42501', ...}
```

PLEASE HELP ME, I HAVE NO IDEA HOW TO FIX THIS. I almost let AI take over my code at this point, but nah, I'm not that desperate 💔
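One thing worth verifying (a hedged sketch, since the client setup isn't shown): the service_role key bypasses RLS entirely, so a 42501 here usually means the client was actually built with the anon key. With supabase-py:

```
import os
from supabase import create_client

# RLS does not apply to requests signed with the service_role key.
# If this insert still hits 42501, the key below is almost certainly
# the anon key rather than service_role.
supabase = create_client(
    os.environ["SUPABASE_URL"],
    os.environ["SUPABASE_SERVICE_ROLE_KEY"],  # server-side only; never ship to browsers
)
```
-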
Is it possible to force mysql server authentication using django.db.backends.mysql?
It's my first question on Stack Overflow, because I can't find the relevant information in the Django documentation. Is it possible to force MySQL server authentication with SSL using django.db.backends.mysql? I have checked its implementation on Django's GitHub, and it seems to support only 3 SSL arguments: ca, cert, and key. What I need is the equivalent of --ssl-mode=VERIFY_IDENTITY. Has anyone found a workaround for this problem? Here is my current configuration. The TLS channel is working as expected, but the identity of the MySQL server is not validated:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': env('DB_NAME'),
        'USER': env('DB_USER'),
        'PASSWORD': env('DB_PASSWORD'),
        'HOST': env('DB_HOST'),
        'PORT': env('DB_PORT'),
        'CONN_MAX_AGE': 600,
        'OPTIONS': {
            'ssl': {
                'ca': env('CA_CERT'),
                'cert': env('CERT'),
                'key': env('KEY')
            }
        }
    }
}
```
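Since OPTIONS entries are forwarded to mysqlclient's connect() call, one workaround worth trying (hedged; support depends on the mysqlclient version) is its ssl_mode connection argument alongside the ssl dict:

```
# The OPTIONS portion of the DATABASES setting above.
'OPTIONS': {
    # mysqlclient (MySQLdb) accepts ssl_mode and hands it to the C client,
    # which then performs hostname verification against the CA chain.
    'ssl_mode': 'VERIFY_IDENTITY',
    'ssl': {'ca': env('CA_CERT'), 'cert': env('CERT'), 'key': env('KEY')},
},
```
-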
How to reuse a Django model for multiple relationships
I want to make a task model and a user model, and I want each task to be related to 3 users: a creator user, an assignee user, and a verifier user. I want to have only one user table. My inclination is to have 3 foreign keys on the task table: creator_id, assignee_id, and verifier_id. Is this the correct way to do it? How do I model that in Django?
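Three ForeignKeys to the same model is indeed the standard approach; the only wrinkle is that each needs a distinct related_name so the reverse accessors don't clash. A sketch:

```
from django.conf import settings
from django.db import models

class Task(models.Model):
    # Three relations to the single user table; related_name keeps the
    # reverse accessors (user.created_tasks, etc.) from colliding.
    creator = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="created_tasks",
    )
    assignee = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="assigned_tasks",
    )
    verifier = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="verified_tasks",
    )
```

Reverse lookups then read naturally: user.created_tasks.all(), user.assigned_tasks.all(), and so on.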