Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Failed to start gunicorn.socket: Unit gunicorn.socket has a bad unit file setting
**
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=evanys
Group=www-data
WorkingDirectory=/home/evanys/www/plus
ExecStart=/home/evanys/www/django22/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/home/evanys/plus.sock \
          plus.wsgi:application

[Install]
WantedBy=multi-user.target
**
Hello, when I run *sudo systemctl start gunicorn.socket* I get the error *Failed to start gunicorn.socket: Unit gunicorn.socket has a bad unit file setting.* I would appreciate any solution. -
How to create a Django custom field with different representations in admin, database, and Python code?
I want to create a custom model field in Django that can store values in bytes in the database but allows users to interact with it in gigabytes (GB) in the Django admin interface. The goal is for the field to handle all necessary conversions seamlessly. Specifically, the field should accept input in GB when adding or editing a record in the admin, convert it to bytes before saving it to the database, and then retrieve it in bytes for use in Python code. I’ve started working on this by overriding methods like to_python, get_prep_value, from_db_value, and formfield, but I’m not entirely sure how to structure these methods to ensure the field behaves as intended. Here is what I already have: class GigabyteField(models.BigIntegerField): def to_python(self, value): if value is None: return value try: return int(value) except (TypeError, ValueError): raise ValueError(f"Invalid value for GigabyteField: {value}") def get_prep_value(self, value): if value is None: return None return int(value * (1024**3)) def from_db_value(self, value, *args): if value is None: return value return int(value / (1024**3)) -
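The byte/GB conversion that get_prep_value and from_db_value must agree on can be isolated into plain helpers and unit-tested independently of Django. A minimal sketch (helper names are hypothetical, not part of the question's code):

```python
GIB = 1024 ** 3  # bytes per gibibyte

def gb_to_bytes(gb):
    """Convert a user-facing GB value to the byte count stored in the DB."""
    if gb is None:
        return None
    return int(gb * GIB)

def bytes_to_gb(n):
    """Convert a stored byte count back to GB for display."""
    if n is None:
        return None
    return n / GIB
```

Whichever direction each field method converts, the two helpers must be exact inverses, otherwise every save/load cycle silently rescales the value.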
Making a Linux SSH key available to a WSL2 Django app launched from Windows PyCharm
I am working in a development environment for my Django application where I have PyCharm installed on windows, and I launch django in wsl2. This has been working for me flawlessly for a while now. However just now I have a new need to have an ssh key for 3rd party command line tool my application invokes. I created the key in linux and I can execute the command successfully in a linux terminal, however when my django application runs the command fails with the same error it failed with prior to creating my ssh key. At one point I was 100% confident I had found a workaround of starting ssh-agent and then adding an environment variable to my run configuration to set SSH_AUTH_SOCK to the value of SSH_AUTH_SOCK from my terminal, however over the course of trying to find an actual acceptable solution, I can no longer reproduce this, so I'm unsure if that ever even worked or it worked for another reason that I was not aware of. I can confirm that in this scenario my SSH_AUTH_SOCK does have the correct value in django. Every time the command requiring the ssh key is executed I get an error … -
django s2forms.ModelSelect2Widget does not work properly
hi all I’m trying using ModelSelect2Widget I set redis server which I test and it works. then I set the following project: models.py class Doctor(models.Model): user=models.OneToOneField(User,on_delete=models.CASCADE) status=models.BooleanField(default=True) def __str__(self): return "{} ({})".format(self.user.first_name,self.department) class Patient(models.Model): user=models.OneToOneField(User,on_delete=models.CASCADE) assignedDoctorId = models.ForeignKey(Doctor, on_delete=models.CASCADE,related_name='doctor_assigned') admitDate=models.DateField(auto_now=True) status=models.BooleanField(default=False) def __str__(self): return self.user.first_name form.py class BaseAutocompleteSelect(s2forms.ModelSelect2Widget): class Media: js = ("admin/js/vendor/jquery/jquery.min.js",) def __init__(self, **kwargs): super().__init__(kwargs) self.attrs = {"style": "width: 300px"} def build_attrs(self, base_attrs, extra_attrs=None): base_attrs = super().build_attrs(base_attrs, extra_attrs) base_attrs.update( {"data-minimum-input-length": 10, "data-placeholder": self.empty_label} ) return base_attrs class DoctorAutocompleteWidget(BaseAutocompleteSelect): empty_label = "-- select doctor --" search_fields = ("username__icontains",) queryset=models.Doctor.objects.all().filter(status=True).order_by("id") class PatientForm(forms.ModelForm): assignedDoctorId=forms.ModelChoiceField(queryset=models.Doctor.objects.all().filter(status=True), widget=DoctorAutocompleteWidget) but results is an empty list enter image description here while using assignedDoctorId=forms.ModelChoiceField(queryset=models.Doctor.objects.all().filter(status=True),empty_label="Name and Department") it show me list but I would like use select2 in order to user redis and the search bar I would like create select and multiselect menu with searchbar to change list value: in the future I would like the same with table list and change a dropdown menu option if user insert a string in an input module or select an option 
from another dropdown menu -
Getting an 'IncompleteSignature' error from the AliExpress Open Platform
I am working over Django using python. I am working on the official SDK for python of Aliexpress. I am trying to get the ACCESS TOKEN from Aliexpress. I got 'IncompleteSignature' issue which means 'The request signature does not conform to platform standards as the part of the response body. Here is full results : {'error_response': {'type': 'ISV', 'code': 'IncompleteSignature', 'msg': 'The request signature does not conform to platform standards', 'request_id': '2141154c17373626146733360'}} My code is very simple because I referred the sample code of their site. (https://openservice.aliexpress.com/doc/api.htm#/api?cid=3&path=/auth/token/create&methodType=GET/POST) Here is my code: def callback_handler(request): code = request.GET.get('code') url = "https://api-sg.aliexpress.com/sync" appkey = "123456" appSecret = "1234567890XXXX" client = iop.IopClient( url, appkey, appSecret, ) request = iop.IopRequest('/auth/token/create') request.add_api_param('code', code) response = client.execute(request) response_type = response.type response_body = response.body print(response_type) print(response_body) return HttpResponse(f"Response type: {response_type}, Response body: {response_body}") I posted the question to the Aliexpress console but they replied with a very vague answer and a Java script reference code. I was shocked. It was not even python. And this advice can be only implemented if I modified their SDK itself. I am not sure if the python SDK is of practical working quality. Since I have been spending too much time and … -
How to Include a Message Field in Structlog Logs and Best Practices for ElasticSearch Integration
I'm working on a Django project where logging is critical, and I'm using structlog to format and manage logs. The plan is to send these logs to ElasticSearch. However, I've encountered an issue: the logs are missing the "message" field, even though I explicitly pass a message in the logger call. Here’s the log output I currently get: { "code": 200, "request": "POST /api/push-notifications/subscribe/", "event": "request_finished", "ip": "127.0.0.1", "request_id": "d0edd77d-d68b-49d8-9d0d-87ee6ff723bf", "user_id": "98c78a2d-57f1-4caa-8b2a-8f5c4e295f95", "timestamp": "2025-01-21T10:40:43.233334Z", "logger": "django_structlog.middlewares.request", "level": "info" } What I want is to include the "message" field, for example: { "code": 200, "request": "POST /api/push-notifications/subscribe/", "event": "request_finished", "ip": "127.0.0.1", "request_id": "d0edd77d-d68b-49d8-9d0d-87ee6ff723bf", "user_id": "98c78a2d-57f1-4caa-8b2a-8f5c4e295f95", "timestamp": "2025-01-21T10:40:43.233334Z", "logger": "django_structlog.middlewares.request", "level": "info", "message": "push notification subscribed successfully" } Here’s my current setup: settings.py Logger Configuration LOGGING = { 'version': 1, 'disable_existing_loggers': False, "formatters": { "json_formatter": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.JSONRenderer(), }, "plain_console": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.dev.ConsoleRenderer(), }, "key_value": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.KeyValueRenderer(key_order=['timestamp', 'level', 'event', 'message']), }, }, 'handlers': { "console": { "class": "logging.StreamHandler", "formatter": "plain_console", }, "json_file": { "level": "INFO", "class": "logging.handlers.RotatingFileHandler", "filename": "logs/ft_json.log", "formatter": "json_formatter", "maxBytes": 1024 * 1024 * 5, "backupCount": 3, }, "flat_line_file": { "level": "INFO", "class": 
"logging.handlers.RotatingFileHandler", "filename": "logs/flat_line.log", "formatter": "key_value", "maxBytes": 1024 * 1024 * … -
Problems sending data between two consumer classes in Django Channels
I am using django channels for the first time and can't wrap my head around something. Here is what I am trying to achieve; I want to create a new message in ChatConsumer which is all good and fine. Problem occurs when i try to pass the id of the chat that new message was created in. I don't get any errors or feedback or nothing. It just fails silently. Here is the code base class ChatConsumer(WebsocketConsumer): """ On initial request, validate user before allowing connection to be accepted """ #on intial request def connect(self): self.room_name = self.scope['url_route']['kwargs']['room_name'] self.room_group_name = f'chat_{self.room_name}' async_to_sync(self.channel_layer.group_add)( self.room_group_name, self.channel_name ) # accept connection self.accept() # send response self.send(text_data=json.dumps({ 'type':'connection_established', 'message':'You are connected' })) def receive(self, text_data): # get data sent from front end text_data_json = json.loads(text_data) # message message = str(text_data_json['form']['message']) if message.strip() == "": message = None # try to decode image or set to none if not available try: base64_image = text_data_json['form']['image'] if base64_image.startswith('data:image'): base64_image = base64_image.split(';base64,')[1] img_name = random.randint(1111111111111,999999999999999) data = ContentFile(base64.b64decode(base64_image), name= 'image' + str(img_name) + '.jpg') except AttributeError: data = None # send message sender = self.scope['user'] # extract chat ID chat_id = int(self.scope['url_route']['kwargs']['room_name']) try: _chat = Chat.objects.get(id = chat_id) … -
Override existing custom Django App template tags
I have an application that uses Weblate to manage translations. I use weblate/weblate Docker image, with my own customizations built as a separate Python package extending this image and built on top of it. The problem is that in the Weblate HTML templates there is an icon template tag that is supposed to load SVG icons from a STATIC_ROOT or a CACHE_DIR location - but my application runs in a serverless setup and as such offloads all of the static resources to a S3 bucket. For most of the resources it works fine, but due to that template tag logic the icons are not loaded and I get these error messages - weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,913: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/weblate.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,918: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/wrench.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,919: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/plus.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,923: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/dots.svg' I wrote my custom … -
ERROR: Failed to build installable wheels for some pyproject.toml based projects (Pillow)
I am making a website with django on vscode. I want to add a field containing images and other files. I did some research and it needs django-anchor installed. I installed it but got an error. Collecting django-anchor Using cached django_anchor-0.5.0-py3-none-any.whl.metadata (6.9 kB) Requirement already satisfied: django<6,>=4.2 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django-anchor) (5.1.4) Collecting pillow<12,>=9.5 (from django-anchor) Using cached pillow-11.1.0.tar.gz (46.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: asgiref<4,>=3.8.1 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (3.8.1) Requirement already satisfied: sqlparse>=0.3.1 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (0.5.3) Requirement already satisfied: tzdata in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (2024.2) Requirement already satisfied: typing-extensions>=4 in c:\msys64\ucrt64\lib\python3.10\site-packages (from asgiref<4,>=3.8.1->django<6,>=4.2->django-anchor) (4.12.2) Using cached django_anchor-0.5.0-py3-none-any.whl (7.6 kB) Building wheels for collected packages: pillow Building wheel for pillow (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pillow (pyproject.toml) did not run successfully. 
│ exit code: 1 ╰─> [209 lines of output] running bdist_wheel running build running build_py creating build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BdfFontFile.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BlpImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BmpImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BufrStubImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\ContainerIO.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\CurImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\DcxImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\DdsImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\EpsImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\ExifTags.py -> … -
Adding data to a User ManyToMany field raises "maximum recursion depth exceeded while calling a Python object"
I have this conversation model which have manytomany field to user model. Issue is when i try add participants in conversation i got error = RecursionError: maximum recursion depth exceeded class Conversation(BaseModel): name = models.CharField(max_length=255, blank=True, null=True) is_group = models.BooleanField(default=False) participants = models.ManyToManyField(User, related_name="conversations", null=True) class Meta: db_table = "conversation" i have this mixins.py class DestroyWithPayloadMixin(object): def destroy(self, *args, **kwargs): super().destroy(*args, **kwargs) return response.Response( { "msg": "Record deleted successfully" }, status=status.HTTP_200_OK ) class ModelDiffMixin(object): """ A model mixin that tracks model fields' values and provide some useful api to know what fields have been changed. """ def __init__(self, *args, **kwargs): super(ModelDiffMixin, self).__init__(*args, **kwargs) self.__initial = self.to_dict @property def diff(self): d1 = self.__initial d2 = self._dict diff_dict = {key: {'previous': value, 'current': d2[key]} for key, value in d1.items() if value != d2[key]} return diff_dict @property def changed_fields(self): return self.diff.keys() def get_field_diff(self, field_name): """ Returns a diff for field if it's changed and None otherwise. """ return self.diff.get(field_name, None) @property def to_dict(self): return model_to_dict(self, fields=[field.name for field in self._meta.fields]) -
How to Set Up Google Cloud ADC (Application Default Credentials) in Django on PythonAnywhere?
I'm trying to set up Google Cloud's Application Default Credentials (ADC) for my Django project on PythonAnywhere, but I keep encountering the following error:

Error creating story: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.

What I've tried:

1. Created a service account in Google Cloud, downloaded the JSON key file, and stored it at /home/footageflow/helloworld2003-754c20cfa98d.json.
2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable by adding the following to .bashrc: export GOOGLE_APPLICATION_CREDENTIALS="/home/footageflow/helloworld2003-754c20cfa98d.json"
3. Tried programmatic credentials, explicitly setting the variable in my Django code: import os; os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/footageflow/helloworld2003-754c20cfa98d.json"
4. Attempted CLI authentication: installed the gcloud CLI on PythonAnywhere, ran gcloud auth application-default login, and authenticated successfully.

Problem: despite all these steps, the error persists when I run the code on PythonAnywhere. The same code works fine locally after authenticating with gcloud.

My questions: Is there something specific I need to configure for ADC to work on PythonAnywhere? Do I need to grant additional permissions to my service account in Google Cloud? Could the issue be related to how PythonAnywhere handles environment variables or service accounts?

Additional information: The Django project is running on PythonAnywhere. Locally, the project works … -
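One detail worth checking: variables exported in .bashrc only affect interactive shells, not the web app's worker process, which is a common reason ADC works locally but not on a hosted web app. A usual workaround is to set the variable in code that runs before any Google client is created, e.g. at the top of the WSGI configuration file (a sketch; the key path is the one from the question and must match your account):

```python
import os

# Path to the service-account key file (assumption: adjust to your setup).
KEY_PATH = "/home/footageflow/helloworld2003-754c20cfa98d.json"

# Set this before any google-cloud client is imported so ADC can find the
# key; .bashrc exports are invisible to the web worker process.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", KEY_PATH)
```

Alternatively, most google-cloud client libraries accept explicit credentials, which avoids relying on process environment entirely.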
Django ORM filter on datetime understands entries with a 'Z' offset, but not '+0000' offset
I have a MySQL database that mirrors a Salesforce database. My table has a "createddate" column that contains ISO 8601 datetime entries. Some entries have an offset of "Z" and some have an offset of "+0000", like this:

2025-01-20T17:18:18.000Z
2025-01-20T18:11:10.000Z
2025-01-20T17:27:55.000+0000
2025-01-20T17:29:46.000Z
2025-01-20T17:28:19.000+0000

When I attempt to filter on a certain date, the filter ONLY returns rows that have a "Z" offset; rows with "+0000" are not returned. My filter code looks like:

receipts = Receipt.objects.filter(createddate__date='2025-01-20').count()

As far as I can tell, both formats conform to ISO 8601. I do have USE_TZ set to True in my settings.py. The field is configured in models.py like:

createddate = models.CharField(db_column='CreatedDate', max_length=28, blank=True, null=True)

I'm relatively new to Django and its ORM; I'm currently working around this with a raw SQL query, but I'd rather do it natively if possible. -
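Note that the column is a CharField, so the database is comparing strings, not datetimes, and "Z" and "+0000" are different strings even though they denote the same UTC offset. Parsed into aware datetimes, the two spellings compare equal; a small normalizer shows this (helper name is hypothetical; the durable fix is storing the column as a DateTimeField):

```python
from datetime import datetime

def parse_sf_datetime(s):
    """Parse Salesforce-style ISO timestamps with either 'Z' or '+0000'.

    datetime.fromisoformat() before Python 3.11 rejects both suffixes,
    so normalize them to '+00:00' first.
    """
    if s.endswith("Z"):
        s = s[:-1] + "+00:00"
    elif s.endswith("+0000"):
        s = s[:-5] + "+00:00"
    return datetime.fromisoformat(s)
```

Once parsed, both variants filter identically; as raw strings, any string-based date lookup will treat them differently.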
Duplicate the content of the block in Django template
Suppose I have the following code in base.html: <meta name="description" content="{% block description %}Some description{% endblock %}"> Other templates that extend base.html, override the contents of the block and so each page gets its own description. What if I want to add <meta property="og:description" content="" />, where the value of the content should equal the above value? How can I add the contents of the block into another place? -
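Django raises a TemplateSyntaxError if a template defines two blocks with the same name, so the block's content cannot simply be emitted twice. One workaround is to widen the block to wrap both meta tags, so a child overrides the pair together (a sketch, not the only approach; setting the description as a context variable in the view also works):

```html
{# base.html: one block wraps both tags so a child overrides them together #}
{% block meta_description %}
<meta name="description" content="Some description">
<meta property="og:description" content="Some description">
{% endblock %}

{# page.html #}
{% extends "base.html" %}
{% block meta_description %}
<meta name="description" content="Page-specific description">
<meta property="og:description" content="Page-specific description">
{% endblock %}
```

The trade-off is that each child repeats the description text twice; a context variable avoids the repetition at the cost of moving the text out of the template.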
JavaScript displays raw JSON instead of rendering HTML content as the "index" button does
I'm developing a Django web application that allows users to follow each other and view a feed of posts from the users they follow. I have a button that triggers an AJAX request to a Django view designed to fetch and render the relevant posts. My Django view correctly retrieves the posts from users the currently logged-in user follows and returns them as a JSON response. However, when my JavaScript code receives this response, it doesn't render the posts into the page as HTML. Instead, the browser displays the raw JSON data. I expect the JavaScript to use the JSON data to update the page with rendered HTML content representing the posts, but this isn't happening. How can I ensure that my JavaScript correctly processes the JSON and renders the posts as HTML? thanks for helping ❤️ post.js: document.addEventListener('DOMContentLoaded', function() { // Use buttons to toggle between views const index = document.querySelector('#index') const following = document.querySelector('#following') if(index) { index.addEventListener('click', () => load_data('index')); } if (following) { following.addEventListener('click', () => load_data("following")); } // By default, load the index load_data('index'); }); function load_data(type) { console.log("type:", type); let url; if (type == "index") { url = "/post" } else if (type == "following") … -
How do I mark the paginator's active page?
With Bootstrap 5.3, the paginator's current page is not highlighted as active while browsing (see attached image). I want the navigation to show the active item on each page, as in the second example image. My code:

<nav aria-label="Page navigation example">
  <ul class="pagination">
    {% if tipomaterial.has_previous %}
      <li class="page-item"><a class="page-link" href="?page={{ tipomaterial.previous_page_number }}">&laquo;</a></li>
    {% endif %}
    {% for page_number in tipomaterial.paginator.page_range %}
      {% if items_page.number == page_number %}
        <a class="page-link" href="?page={{ tipomaterial.page_number }}">{{ page.number }}</a>
      {% else %}
        <li class="page-item" aria-current="page">
          <a class="page-link" href="?page={{ tipomaterial.page_number }}">{{ page.number }}</a>
        </li>
      {% endif %}
    {% endfor %}
    {% if tipomaterial.has_next %}
      <li class="page-item"><a class="page-link" href="?page={{ tipomaterial.next_page_number }}">&raquo;</a></li>
    {% endif %}
  </ul>
</nav> -
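The loop above mixes three names (items_page.number, tipomaterial.page_number, page.number) and drops the <li> in the active branch. A sketch of a consistent version, assuming tipomaterial is the Page object (so the current page is tipomaterial.number) and using Bootstrap's active class:

```html
{% for page_number in tipomaterial.paginator.page_range %}
  <li class="page-item {% if tipomaterial.number == page_number %}active{% endif %}">
    <a class="page-link" href="?page={{ page_number }}"
       {% if tipomaterial.number == page_number %}aria-current="page"{% endif %}>
      {{ page_number }}
    </a>
  </li>
{% endfor %}
```

Each link targets the loop's own page_number rather than a property of the current page, which is what makes every item navigate to a distinct page.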
s3 upload timeout on dockerized Digital Ocean setup
I have a S3 compatible storage and a Droplet server on Digital Ocean. The dockerized Django app I am running is trying to sync static assets to the storage. This fails from the Droplet server/Docker container, but not when accessing the same S3 storage from my local setup. I can also test the upload straight from the server (outside the dockerized app) and this works, too. So something about the Docker setup is making the S3 requests fail. I made a simple test case, in s3upload.py with foobar.txt present in the same directory: from boto3.s3.transfer import S3Transfer import boto3 import logging logging.getLogger().setLevel(logging.DEBUG) client = boto3.client('s3', aws_access_key_id="…", aws_secret_access_key="…", region_name="ams3", endpoint_url="https://ams3.digitaloceanspaces.com") transfer = S3Transfer(client) bucket_name = "…" transfer.upload_file("foobar.txt", bucket_name, "foobar.txt") The error I am seeing when calling this from the docker container is: Traceback (most recent call last): File "/usr/local/lib/python3.13/site-packages/boto3/s3/transfer.py", line 372, in upload_file future.result() ~~~~~~~~~~~~~^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/futures.py", line 103, in result return self._coordinator.result() ~~~~~~~~~~~~~~~~~~~~~~~~^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/futures.py", line 264, in result raise self._exception File "/usr/local/lib/python3.13/site-packages/s3transfer/tasks.py", line 135, in __call__ return self._execute_main(kwargs) ~~~~~~~~~~~~~~~~~~^^^^^^^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/tasks.py", line 158, in _execute_main return_value = self._main(**kwargs) File "/usr/local/lib/python3.13/site-packages/s3transfer/upload.py", line 796, in _main client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.13/site-packages/botocore/client.py", line 569, in _api_call 
return self._make_api_call(operation_name, kwargs) ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^ … -
Django HTTP response always sets `sessionid` cookie and session data do not persist
I have created a custom backend and related middleware which log users in on the sole condition that an ID_TOKEN cookie is passed along with the request (authentication is done by AWS Cognito + Lambda Edge, managed by an AWS CouldFront). My code is extensively based on django.contrib.auth.backends.RemoteUserBackend and its related middleware middleware django.contrib.auth.middleware.RemoteUserMiddleware. While dealing with custom session data is working fine both locally and in a Docker container using runserver + unit tests do pass, I lose all session data in production (code running in a container on AWS ECS) from one request/response to another. From what I can see in my Firefox network tab, a set-cookie header is always sent with the HTTP response, causing session data to be lost. I guess they must be flushed as well on the back-end side (sessions use database store, production is running on gunicorn). I have set SESSION_COOKIE_SECURE = True in production but it did not solve the issue. Moreover, using django_extensions and its runserver_plus with an auto-generated certificate to use HTTPS locally as well did not allow me to reproduce the issue. Here is one set-cookie example: set-cookie sessionid=rlc...tn; expires=Mon, 03 Feb 2025 14:29:53 GMT; HttpOnly; Max-Age=1209600; Path=/; SameSite=Lax; … -
Django DB Connection Pool Shared Across Workers?
In Django, are DB connections in the Psycopg 3 DB Connection Pool shared between gevent gunicorn workers or does each worker spawn its own DB connection pool? https://docs.djangoproject.com/en/5.1/ref/databases/#connection-pool -
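For reference, the pool is configured per database alias and lives in process memory, so each gunicorn worker process builds its own pool; pools are not shared across worker processes. With gevent workers, the greenlets inside one worker do share that worker's pool (a config sketch; pool sizes are illustrative and psycopg[pool] must be installed):

```python
# settings.py: Django 5.1+ with psycopg 3; every worker process
# constructs its own pool from this configuration at first use.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        # ...
        "OPTIONS": {
            # Passed through to psycopg_pool.ConnectionPool
            "pool": {"min_size": 2, "max_size": 4},
        },
    }
}
```

A practical consequence: total connections to Postgres is roughly max_size multiplied by the number of worker processes, which matters when sizing max_connections.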
RUN pip install --no-cache-dir -r requirements.txt appears to install packages, but they are missing at runtime in Docker
I've been trying to use Docker for a couple of projects, one Django and one a Python Telegram bot. In both cases, no matter how I copy or install requirements.txt into the container, the libraries apparently get installed, but then I get errors like this in the main Python container:

telegram-bot-container | File "/app/run.py", line 15, in <module>
telegram-bot-container |   import logging, mysql_handler, cmc_handler, constants
telegram-bot-container | File "/app/mysql_handler.py", line 2, in <module>
telegram-bot-container |   from decouple import config
telegram-bot-container | ModuleNotFoundError: No module named 'decouple'

And I have to install all the missing libraries manually, as if requirements.txt were redundant:

pip install python-telegram-bot mysql-connector-python python-coinmarketcap python-decouple

Please help me identify the problem. My whole Dockerfile:

FROM python:3.10-slim
WORKDIR /app
COPY ./requirements.txt /app/
RUN python -m pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt || echo "Skipping problematic package." && \
    pip install python-telegram-bot mysql-connector-python python-coinmarketcap
COPY . /app
EXPOSE 8081
CMD ["python", "run.py"]

I tried rebuilding with and without caching. I can see in the logs that the packages are being installed. -
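A likely culprit is the shell operator precedence in the RUN line: `A && B || C && D` parses as `((A && B) || C) && D`, so when `pip install -r requirements.txt` fails, the `echo` succeeds, the build continues, and only the three manually listed packages end up installed. Splitting the steps makes a requirements failure fatal and visible (a sketch of the same Dockerfile without the masking):

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN python -m pip install --upgrade pip
# A failure here now aborts the build instead of being masked by "|| echo".
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8081
CMD ["python", "run.py"]
```

With the masking removed, the build log will show the actual pip error for whichever requirement fails to install.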
Stripe 404 with DJStripe and Django
I am running a django app using stripe for payments. Upon running stripe cli I am encountering 404 errors when triggering payment events. project level urls.py path('api/payments/', include('payments_app.urls')) payments_app level urls.py: path('stripe/', include('djstripe.urls', namespace='djstripe')) I am consistently encountering the following errors: 2025-01-19 22:04:48 --> customer.created [evt_1QjCEGKCdat1JCnURBfuIFLH] 2025-01-19 22:04:48 <-- [404] POST http://localhost:8000/api/payments/djstripe/ [evt_1QjCEGKCdat1JCnURBfuIFLH] I can assure you that the API keys have been set correctly as I was able to successfully sync from the products and prices. I ran many permutations of the following url with changing the urlpatterns stripe listen --forward-to http://localhost:8000/api/payments/stripe/webhook I tried running: curl -X POST http://localhost:8000/payments/stripe/webhook -H "Content-Type: application/json" -d '{}' It also give a 404 -
Django refuses connection on AWS instance
I have a django app which is close to the default install app. In settings.py I have DEBUG = False ALLOWED_HOSTS = ['*'] I have added a url server/get_something which I can request and returns fine when I am running the server locally using python manage.py runserver I have installed and run my app on an AWS instance using port 7500, and I have opened that port to all addresses in the AWS security group like this: However when I make my request from a remote computer (to the AWS instance) I get a "refused to connect" error. There is no relevant printed output on the django process. It is worth noting that if I try to connect to a different port, it times out, so I think that the request is getting past AWS' firewall. But I can't work out why it isn't getting to django on the instance. Also, I have used curl locally on the AWS instance: curl 127.0.0.1:7500/server/get_something And this works fine. -
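A detail consistent with these symptoms: runserver binds to 127.0.0.1 by default, which is why curl on the instance itself works while remote connections are refused even though the security group allows the port. Binding to all interfaces makes it reachable externally (a sketch; runserver is not meant for production, where gunicorn behind nginx is the usual setup):

```shell
python manage.py runserver 0.0.0.0:7500
```

A timeout on other ports versus an immediate "refused" on 7500 also fits this picture: the firewall passes 7500, but nothing is listening on the external interface.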
ModuleNotFoundError: No module named 'wagtail.contrib.modeladmin' when adding 'wagtail.contrib.modeladmin' to base.py
This is my admin.py file:

from django.contrib import admin
from .models import Subscribers

@admin.register(Subscribers)
class SubscriberAdmin(admin.ModelAdmin):
    """Admin configuration for Subscribers."""
    model = Subscribers
    menu_label = "Subscribers"
    menu_icon = "placeholder"
    menu_order = 290
    add_to_settings_menu = False
    exclude_from_explorer = False
    list_display = ("email", "full_name",)
    search_fields = ("email", "full_name",) -
Django 5.0.x async with raw query set & iterator
I have a raw queryset that returns a few million rows. I currently use Django's iterator() to improve performance. The problem is that I want to make the function where the query is consumed asynchronous. When I do, I can no longer use the raw queryset because I get this error: django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. Using sync_to_async makes iterator() unusable, and while aiterator() exists, it doesn't work on raw querysets. How can I use an iterator with a raw queryset in an asynchronous context? Code:

def _get_elements_to_process(self):
    return Elements.objects.raw(
        """
        My query
        """,
    )

async def fetch_data(self):
    for element in _get_elements_to_process().iterator():
        # make some asynchronous action -
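One generic pattern is to drain the blocking iterator in a worker thread and hand rows to the async side in chunks, so the event loop stays free between fetches. A sketch with a plain iterable standing in for the raw queryset's iterator() (names are hypothetical; in Django the iterator would come from the raw queryset, and the thread hop plays the role sync_to_async would):

```python
import asyncio
from itertools import islice

def _next_chunk(it, size):
    """Pull up to `size` items from a blocking iterator (runs in a thread)."""
    return list(islice(it, size))

async def iterate_blocking(iterable, chunk_size=1000):
    """Async generator over a blocking iterator, fetching chunks in a thread."""
    it = iter(iterable)
    while True:
        # to_thread keeps the event loop responsive while the cursor advances
        chunk = await asyncio.to_thread(_next_chunk, it, chunk_size)
        if not chunk:
            return
        for item in chunk:
            yield item

async def fetch_data(rows):
    out = []
    async for element in iterate_blocking(rows, chunk_size=2):
        out.append(element)  # make some asynchronous action per element
    return out
```

One caveat under the assumption of Django: the database cursor is then touched only from the worker thread, which is the same constraint sync_to_async enforces, so the chunk size becomes the knob balancing thread hops against memory.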
I can't create a new URL in Django
I've watched some tutorials on how to create a simple new URL and for some reason it doesn't work; it looks like I didn't register any URLs even though I did. I created an app called 'Login' and registered it in the INSTALLED_APPS list in Django's settings.py:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'Login.apps.LoginConfig',
]

Then I created a function called 'home' in the app's views file, to show a phrase on the page:

from django.shortcuts import render
from django.http import HttpResponse

def home(request):
    return HttpResponse('Hello')

And then I created a new URL path in the project's urls.py file, without a name at first, just (""):

from django.contrib import admin
from django.urls import path
from Login.views import home

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', views.home),
]

When I use the python manage.py runserver command, it just goes back to Django's "successful install" page, the one with the little rocket. Then I tried using a name for the URL ("home/"):

from django.contrib import admin
from django.urls import path
from Login import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('home/', views.home)
]

But when I access localhost:8000/home/ it says … -
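The two snippets mix import styles: the first imports `home` directly but then references `views.home`, which raises a NameError when the URLconf is imported. A consistent version (a config sketch using the second snippet's import style):

```python
from django.contrib import admin
from django.urls import path

from Login import views

urlpatterns = [
    path('admin/', admin.site.urls),
    # Root URL; visiting localhost:8000/ now calls views.home
    path('', views.home, name='home'),
]
```

The rocket page only appears when urlpatterns contains nothing besides the admin route, so once this root pattern loads cleanly the "Hello" response should replace it.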
Python package `drf-comments` not being recognized
I am developing a REST-based Django project. I want to implement a commenting system in the project, but it has to be decoupled from the other apps, so DeepSeek suggested using drf-comments. The whole thing seems appealing, as it does not require writing any models, views, serializers, or URLs. DeepSeek just told me to add the URLs coming from the drf-comments package, and I did so. Everything looks fine, but when I attempt to run python manage.py makemigrations or the migrate command, I get the error: ModuleNotFoundError: No module named 'drf_comments'. I went back to my chatbots (DeepSeek and Blackbox) and they told me to delete and recreate the venv to make sure everything works, but I know it is fine. Does anybody know what the problem is with this Python package (drf-comments)? Maybe it is not supported anymore. Additional information: Python version: Python 3.11.4. The pip list: asgiref 3.8.1 certifi 2024.12.14 cffi 1.17.1 charset-normalizer 3.4.1 defusedxml 0.8.0rc2 Django 5.1.4 djangorestframework 3.15.2 djangorestframework_simplejwt 5.4.0 djoser 2.3.1 drf-comments 1.2.1 drf-nested-routers 0.94.1 idna 3.10 oauthlib 3.2.2 pip 24.3.1 pycparser 2.22 PyJWT 2.10.1 python3-openid 3.2.0 requests 2.32.3 requests-oauthlib 2.0.0 setuptools 65.5.0 social-auth-app-django 5.4.2 social-auth-core 4.5.4 sqlparse …