feat(ansible.awx & stable.diffusion.webui & gitlab): add new microservice definitions
add ansible awx (incomplete, because the installer starts its docker containers via ansible instead of docker-compose). add stable.diffusion.webui for generating images via AI (self-hosted solution). add gitlab.
parent 89294b06b9
commit 96ab03f36c
@@ -1,2 +1,4 @@
 volumes/
 .env
+*.tar
+*.gz
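The new ignore rules can be sanity-checked with `git check-ignore` in a throwaway repository (a minimal sketch; the file names are made up for illustration):

```bash
# Build a scratch repo containing the updated .gitignore rules.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'volumes/\n.env\n*.tar\n*.gz\n' > .gitignore
touch backup.tar dump.sql.gz .env
mkdir -p volumes && touch volumes/data

# git check-ignore prints each path that the ignore rules match.
git check-ignore backup.tar dump.sql.gz .env volumes/data
```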
@@ -26,8 +26,7 @@ you can use docker / docker-compose with sudo but you can add your user to ```docker``` group:
 ```bash
 sudo groupadd docker
-sudo usermod -aG docker $USER
-newgrp docker
+sudo usermod -aG docker $USER && newgrp docker
 ```
 
 you will likely need to reboot for the permission changes to take effect.
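After the `usermod`/`newgrp` step (or the reboot), you can confirm the group change took effect in the current session (a minimal check using only standard coreutils):

```bash
# List the current user's groups; "docker" should appear after re-login.
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group yet - re-login or reboot"
fi
```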
@@ -0,0 +1,89 @@
# Ansible AWX (Tower)

Before installing, you need Ansible itself to integrate everything related to Ansible AWX.

Get the latest version of the pip package manager:

```bash
curl https://bootstrap.pypa.io/get-pip.py | python
```

Install Ansible:

```bash
pip3 install ansible
```

On Void Linux, just install it via the ```xbps-install``` package manager:

```bash
xbps-install -S ansible
```
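Whichever route you pick, it is worth verifying the toolchain before moving on (a small sketch; the reported paths will differ per system):

```bash
# Confirm the binaries the AWX installer relies on are actually on PATH.
for tool in ansible ansible-playbook pip3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: missing"
  fi
done
```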

## Installing

It is not a simple installation that only needs ```docker-compose``` commands.

As a first step, download the version you are interested in:

```bash
wget https://github.com/ansible/awx/archive/refs/tags/17.1.0.tar.gz
```

Once AWX is downloaded, unpack the archive and enter the installer directory:

```bash
tar -xvf 17.1.0.tar.gz
cd awx-17.1.0/installer
```

Generate a hash for the secret key:

```bash
echo "sakdaslnaksdlln" | openssl dgst -sha256
```
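Note that piping a fixed string through `openssl dgst` always yields the same value; for a real deployment you may prefer a properly random secret instead (a hedged alternative, not something the installer requires):

```bash
# Generate a random 64-character hex secret for secret_key.
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```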

Now edit these records in the ```inventory``` file:

```bash
admin_user=admin
admin_password=admin
secret_key=<generated_before_sha256_hash>
```
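The same edits can be scripted with `sed` instead of an editor (a sketch against a throwaway copy of the file, so nothing real is touched; the `changeme` password is a placeholder):

```bash
# Work on a scratch copy so this sketch is self-contained.
cat > /tmp/inventory.demo <<'EOF'
admin_user=admin
admin_password=password
secret_key=awxsecret
EOF

# Apply the settings non-interactively (GNU sed, as on Void Linux).
SECRET=$(openssl rand -hex 32)
sed -i \
  -e "s/^admin_password=.*/admin_password=changeme/" \
  -e "s/^secret_key=.*/secret_key=${SECRET}/" \
  /tmp/inventory.demo

grep -E '^(admin_user|admin_password|secret_key)=' /tmp/inventory.demo
```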

Install AWX (if your user is not a member of the ```docker``` group, use sudo):

```bash
sudo ansible-playbook -i inventory install.yml
```

If you run into a problem like this:

```bash
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'requests'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker>=5.0.0 (Python >= 3.6) or docker<5.0.0 (Python 2.7)) on void.node.00's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` (Python >= 3.6) or `pip install docker==4.4.4` (Python 2.7). The error was: No module named 'requests'"}
```

just install the ```python3-docker``` package (https://stackoverflow.com/questions/59384708/ansible-returns-with-failed-to-import-the-required-python-library-docker-sdk-f):

```bash
sudo xbps-install -S python3-docker
```

and fix up the Python packages (https://stackoverflow.com/questions/50151210/ansible-unable-to-run-docker-compose-in-an-ansible-playbook):

```bash
pip uninstall docker docker-py docker-compose
pip install docker-compose
```

then reinstall ansible:

```bash
sudo xbps-remove -of ansible
sudo xbps-install -S ansible
```

If the problem still persists, just use this (https://stackoverflow.com/questions/50151210/ansible-unable-to-run-docker-compose-in-an-ansible-playbook - second answer):

```bash
sudo pip install docker-compose
```
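To see which of the Python modules Ansible's docker tasks need are actually importable by your interpreter, a read-only diagnostic like this can help (it only reports, it changes nothing):

```bash
python3 - <<'EOF'
import importlib.util

# The docker_compose/docker_container tasks need these importable by the
# target interpreter (here: localhost's python3).
for mod in ("requests", "docker", "compose"):
    status = "OK" if importlib.util.find_spec(mod) else "MISSING"
    print(f"{mod}: {status}")
EOF
```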
@@ -0,0 +1,29 @@
[run]
source = awx
branch = True
omit =
    awx/main/migrations/*
    awx/lib/site-packages/*

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:

ignore_errors = True

[xml]
output = ./reports/coverage.xml
@@ -0,0 +1 @@
awx/ui/node_modules
@@ -0,0 +1,17 @@
---
files:
  awx/ui/:
    labels: component:ui
    maintainers: $team_ui
  awx/api/:
    labels: component:api
    maintainers: $team_api
  awx/main/:
    labels: component:api
    maintainers: $team_api
  installer/:
    labels: component:installer

macros:
  team_api: wwitzel3 matburt chrismeyersfsu cchurch AlanCoding ryanpetrello rooftopcellist
  team_ui: jlmitch5 jaredevantabor mabashian marshmalien benthomasson jakemcdermott
@@ -0,0 +1,3 @@
# Community Code of Conduct

Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
@@ -0,0 +1,49 @@
<!---
The Ansible community is highly committed to the security of our open source
projects. Security concerns should be reported directly by email to
security@ansible.com. For more information on the Ansible community's
practices regarding responsible disclosure, see
https://www.ansible.com/security
-->

##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation

##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- API
- UI
- Installer

##### SUMMARY
<!-- Briefly describe the problem. -->

##### ENVIRONMENT
* AWX version: X.Y.Z
* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:

##### STEPS TO REPRODUCE

<!-- For new features, show how the feature would be used. For bugs, please show
exactly how to reproduce the problem. Ideally, provide all steps and data needed
to recreate the bug from a new awx install. -->

##### EXPECTED RESULTS

<!-- For bug reports, what did you expect to happen when running the steps
above? -->

##### ACTUAL RESULTS

<!-- For bug reports, what actually happened? -->

##### ADDITIONAL INFORMATION

<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
@@ -0,0 +1,41 @@
---
name: "\U0001F41B Bug report"
about: Create a report to help us improve

---
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:

- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project

We have to limit this because of limited volunteer time to respond to issues! -->

##### ISSUE TYPE
- Bug Report

##### SUMMARY
<!-- Briefly describe the problem. -->

##### ENVIRONMENT
* AWX version: X.Y.Z
* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:

##### STEPS TO REPRODUCE

<!-- Please describe exactly how to reproduce the problem. -->

##### EXPECTED RESULTS

<!-- What did you expect to happen when running the steps above? -->

##### ACTUAL RESULTS

<!-- What actually happened? -->

##### ADDITIONAL INFORMATION

<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
@@ -0,0 +1,17 @@
---
name: "✨ Feature request"
about: Suggest an idea for this project

---
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:

- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project

We have to limit this because of limited volunteer time to respond to issues! -->

##### ISSUE TYPE
- Feature Idea

##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->
@@ -0,0 +1,9 @@
---
name: "\U0001F525 Security bug report"
about: How to report security vulnerabilities

---

For all security related bugs, email security@ansible.com instead of using this issue tracker and you will receive a prompt response.

For more information on the Ansible community's practices regarding responsible disclosure, see https://www.ansible.com/security
@@ -0,0 +1,9 @@
Bug Report: type:bug
Bugfix Pull Request: type:bug
Feature Request: type:enhancement
Feature Pull Request: type:enhancement
UI: component:ui
API: component:api
Installer: component:installer
Docs Pull Request: component:docs
Documentation: component:docs
@@ -0,0 +1,39 @@
##### SUMMARY
<!--- Describe the change, including rationale and design decisions -->

<!---
If you are fixing an existing issue, please include "related #nnn" in your
commit message and your description; but you should still explain what
the change does.
-->

##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Pull Request
- Bugfix Pull Request
- Docs Pull Request

##### COMPONENT NAME
<!--- Name of the module/plugin/task -->
- API
- UI
- Installer

##### AWX VERSION
<!--- Paste verbatim output from `make VERSION` between quotes below -->
```

```


##### ADDITIONAL INFORMATION
<!---
Include additional information to help people understand the change here.
For bugs that don't have a linked bug report, a step-by-step reproduction
of the problem is helpful.
-->

<!--- Paste verbatim command output below, e.g. before and after your change -->
```

```
@@ -0,0 +1,151 @@
# Ignore generated schema
swagger.json
schema.json
reference-schema.json

# Tags
.tags
.tags1

# Tower
awx-dev
awx/settings/local_*.py*
awx/*.sqlite3
awx/*.sqlite3_*
awx/job_status
awx/projects
awx/job_output
awx/public/media
awx/public/static
awx/ui/tests/test-results.xml
awx/ui/client/src/local_settings.json
awx/main/fixtures
awx/*.log
tower/tower_warnings.log
celerybeat-schedule
awx/ui/static
awx/ui/build_test
awx/ui/client/languages
awx/ui/templates/ui/index.html
awx/ui/templates/ui/installing.html
awx/ui_next/node_modules/
awx/ui_next/src/locales/
awx/ui_next/coverage/
awx/ui_next/build
awx/ui_next/.env.local
awx/ui_next/instrumented
rsyslog.pid
tools/prometheus/data
tools/docker-compose/Dockerfile

# Tower setup playbook testing
setup/test/roles/postgresql
**/provision_docker

# Python & setuptools
__pycache__
/build
/deb-build
/reprepro
/rpm-build
/tar-build
/setup-bundle-build
/dist
/*.egg-info
*.py[c,o]

# JavaScript
/Gruntfile.js
/Brocfile.js
/bower.json
/package.json
/testem.yml
**/coverage
/.istanbul.yml
**/node_modules/**
/tmp
**/npm-debug.log*
**/package-lock.json

# UI build flag files
awx/ui/.deps_built
awx/ui/.release_built
awx/ui/.release_deps_built

# Testing
.cache
.coverage
.tox
coverage.xml
htmlcov
pep8.txt
scratch
testem.log
awx/awx_test.sqlite3-journal
.pytest_cache/

# Mac OS X
*.DS_Store

# Editors
*.sw[poj]
*~

# Vagrant
/Vagrantfile
tools/vagrant/local.yml
.vagrant*

# Virtualbox
ansible-tower-*-ova
ansible-tower-*.box

# Setup
setup.log
backup.log
restore.log
setup/tower_setup_conf.yml
setup/setup.log
setup/inventory
tower-backup-*

# Ansible
**/*.retry

# Other
.tower_cycle
env/*
nohup.out
reports
*.bak
*.bak[0-9]
*.dot
*.log
*.log.[0-9]
*.results
local/
*.mo
requirements/vendor
.i18n_built
.idea/*
*credentials*.y*ml*

# AWX python libs populated by requirements.txt
awx/lib/.deps_built
awx/lib/site-packages
venv/*
use_dev_supervisor.txt


# Ansible module tests
/awx_collection_test_venv/
/awx_collection/*.tar.gz
/sanity/
/awx_collection_build/

.idea/*
*.unison.tmp
*.#
/tools/docker-compose/overrides/
/awx/ui_next/.ui-built
/Dockerfile
@@ -0,0 +1,12 @@
---
ignore: |
  .tox
  awx/main/tests/data/inventory/plugins/**
  # vault files
  awx/main/tests/data/ansible_utils/playbooks/valid/vault.yml
  awx/ui/test/e2e/tests/smoke-vars.yml

extends: default

rules:
  line-length: disable
@@ -0,0 +1,85 @@
Coding Standards and Practices
==============================

This is not meant to be a style document so much as a practices document for ensuring performance and convention in the Ansible Tower API.

Paginate Everything
===================

Anything that returns a collection must be paginated.

Assume large data sets
======================

Don't test exclusively with small data. Assume 1000-10000 hosts in all operations, with years of event data.

Some of our users have 30,000 machines they manage.

API performance
===============

In general, the expected response time for any API call is something like 1/4 of a second or less. Signs of slow API
performance should be regularly checked, particularly for missing indexes.

Missing Indexes
===============

Any filters the UI uses should be indexed.

Migrations
==========

Always think about any existing data when adding new fields. It's OK for the upgrade to take time in order to get the
database consistent.

Limit Queries
=============

The number of queries made should be constant and must not vary with the size of the result set.

Consider RBAC
=============

The returns of all collections must be filtered by who has access to view them, without exception.

Discoverability
===============

All API endpoints must be traversable from "/", and have comments, where possible, explaining their purpose.

Friendly Comments
=================

All API comments are exposed by the API browser and must be fit for customers. Avoid jokes in API comments and error
messages, as well as FIXME comments in places that the API will display.

UI Sanity
=========

Where possible the API should provide endpoints that feed raw data into the UI; the UI should not have to do lots of
data transformations, as it will be less responsive and less capable of doing them.

When requesting a collection of items of size N, the UI must not make any extra API queries for each item in the result set.

Effective Usage of Query Sets
=============================

The system must return Django result sets rather than building JSON in memory in nearly all cases. Use things like
exclude and joins, and let the database do the work.

Serializers
===========

No database queries may be made in serializers, because these are executed once per item rather than once per page.

REST verbs
==========

REST verbs should be RESTy. Don't use GETs to do things that should be a PUT or POST.

Unit tests
==========

Every URL/route must have unit test coverage. Consider both positive and negative tests.
@ -0,0 +1,385 @@
|
||||||
|
# Changelog
|
||||||
|
|
||||||
|
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
|
||||||
|
|
||||||
|
# 17.1.0 (March 9th, 2021)
|
||||||
|
- Addressed a security issue in AWX (CVE-2021-20253)
|
||||||
|
- Fixed a bug permissions error related to redis in K8S-based deployments: https://github.com/ansible/awx/issues/9401
|
||||||
|
|
||||||
|
# 17.0.1 (January 26, 2021)
|
||||||
|
- Fixed pgdocker directory permissions issue with Local Docker installer: https://github.com/ansible/awx/pull/9152
|
||||||
|
- Fixed a bug in the UI which caused toggle settings to not be changed when clicked: https://github.com/ansible/awx/pull/9093
|
||||||
|
|
||||||
|
# 17.0.0 (January 22, 2021)
|
||||||
|
- AWX now requires PostgreSQL 12 by default: https://github.com/ansible/awx/pull/8943
|
||||||
|
**Note:** users who encounter permissions errors at upgrade time should `chown -R ~/.awx/pgdocker` to ensure it's owned by the user running the install playbook
|
||||||
|
- Added support for region name for OpenStack inventory: https://github.com/ansible/awx/issues/5080
|
||||||
|
- Added the ability to chain undefined attributes in custom notification templates: https://github.com/ansible/awx/issues/8677
|
||||||
|
- Dramatically simplified the `image_build` role: https://github.com/ansible/awx/pull/8980
|
||||||
|
- Fixed a bug which can cause schema migrations to fail at install time: https://github.com/ansible/awx/issues/9077
|
||||||
|
- Fixed a bug which caused the `is_superuser` user property to be out of date in certain circumstances: https://github.com/ansible/awx/pull/8833
|
||||||
|
- Fixed a bug which sometimes results in race conditions on setting access: https://github.com/ansible/awx/pull/8580
|
||||||
|
- Fixed a bug which sometimes causes an unexpected delay in stdout for some playbooks: https://github.com/ansible/awx/issues/9085
|
||||||
|
- (UI) Added support for credential password prompting on job launch: https://github.com/ansible/awx/pull/9028
|
||||||
|
- (UI) Added the ability to configure LDAP settings in the UI: https://github.com/ansible/awx/issues/8291
|
||||||
|
- (UI) Added a sync button to the Project detail view: https://github.com/ansible/awx/issues/8847
|
||||||
|
- (UI) Added a form for configuring Google Outh 2.0 settings: https://github.com/ansible/awx/pull/8762
|
||||||
|
- (UI) Added searchable keys and related keys to the Credentials list: https://github.com/ansible/awx/issues/8603
|
||||||
|
- (UI) Added support for advanced search and copying to Notification Templates: https://github.com/ansible/awx/issues/7879
|
||||||
|
- (UI) Added support for prompting on workflow nodes: https://github.com/ansible/awx/issues/5913
|
||||||
|
- (UI) Added support for session timeouts: https://github.com/ansible/awx/pull/8250
|
||||||
|
- (UI) Fixed a bug that broke websocket streaming for the insecure ws:// protocol: https://github.com/ansible/awx/pull/8877
|
||||||
|
- (UI) Fixed a bug in the user interface when a translation for the browser's preferred locale isn't available: https://github.com/ansible/awx/issues/8884
|
||||||
|
- (UI) Fixed bug where navigating from one survey question form directly to another wasn't reloading the form: https://github.com/ansible/awx/issues/7522
|
||||||
|
- (UI) Fixed a bug which can cause an uncaught error while launching a Job Template: https://github.com/ansible/awx/issues/8936
|
||||||
|
- Updated autobahn to address CVE-2020-35678
|
||||||
|
|
||||||
|
## 16.0.0 (December 10, 2020)
|
||||||
|
- AWX now ships with a reimagined user interface. **Please read this before upgrading:** https://groups.google.com/g/awx-project/c/KuT5Ao92HWo
|
||||||
|
- Removed support for syncing inventory from Red Hat CloudForms - https://github.com/ansible/awx/commit/0b701b3b2
|
||||||
|
- Removed support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
|
||||||
|
- Upgraded NodeJS to actively maintained LTS 14.15.1 - https://github.com/ansible/awx/pull/8766
|
||||||
|
- Added Git-LFS to the default image build - https://github.com/ansible/awx/pull/8700
|
||||||
|
- Added the ability to specify `metadata.labels` in the podspec for container groups - https://github.com/ansible/awx/issues/8486
|
||||||
|
- Added support for Kubernetes pod annotations - https://github.com/ansible/awx/pull/8434
|
||||||
|
- Added the ability to label the web container in local Docker installs - https://github.com/ansible/awx/pull/8449
|
||||||
|
- Added additional metadata (as an extra var) to playbook runs to report the SCM branch name - https://github.com/ansible/awx/pull/8433
|
||||||
|
- Fixed a bug that caused k8s installations to fail due to an incorrect Helm repo - https://github.com/ansible/awx/issues/8715
|
||||||
|
- Fixed a bug that prevented certain Workflow Approval resources from being deleted - https://github.com/ansible/awx/pull/8612
|
||||||
|
- Fixed a bug that prevented the deletion of inventories stuck in "pending deletion" state - https://github.com/ansible/awx/issues/8525
|
||||||
|
- Fixed a display bug in webhook notifications with certain unicode characters - https://github.com/ansible/awx/issues/7400
|
||||||
|
- Improved support for exporting dependent objects (Inventory Hosts and Groups) in the `awx export` CLI tool - https://github.com/ansible/awx/commit/607bc0788
|
||||||
|
|
||||||
|
## 15.0.1 (October 20, 2020)
|
||||||
|
- Added several optimizations to improve performance for a variety of high-load simultaneous job launch use cases https://github.com/ansible/awx/pull/8403
|
||||||
|
- Added the ability to source roles and collections from requirements.yaml files (not just requirements.yml) - https://github.com/ansible/awx/issues/4540
|
||||||
|
- awx.awx collection modules now provide a clearer error message for incompatible versions of awxkit - https://github.com/ansible/awx/issues/8127
|
||||||
|
- Fixed a bug in notification messages that contain certain unicode characters - https://github.com/ansible/awx/issues/7400
|
||||||
|
- Fixed a bug that prevents the deletion of Workflow Approval records - https://github.com/ansible/awx/issues/8305
|
||||||
|
- Fixed a bug that broke the selection of webhook credentials - https://github.com/ansible/awx/issues/7892
|
||||||
|
- Fixed a bug which can cause confusing behavior for social auth logins across distinct browser tabs - https://github.com/ansible/awx/issues/8154
|
||||||
|
- Fixed several bugs in the output of Workflow Job Templates using the `awx export` tool - https://github.com/ansible/awx/issues/7798 https://github.com/ansible/awx/pull/7847
|
||||||
|
- Fixed a race condition that can lead to missing hosts when running parallel inventory syncs - https://github.com/ansible/awx/issues/5571
|
||||||
|
- Fixed an HTTP 500 error when certain LDAP group parameters aren't properly set - https://github.com/ansible/awx/issues/7622
|
||||||
|
- Updated a few dependencies in response to several CVEs:
|
||||||
|
* CVE-2020-7720
|
||||||
|
* CVE-2020-7743
|
||||||
|
* CVE-2020-7676
|
||||||
|
|
||||||
|
## 15.0.0 (September 30, 2020)
|
||||||
|
- Added improved support for fetching Ansible collections from private Galaxy content sources (such as https://github.com/ansible/galaxy_ng) - https://github.com/ansible/awx/issues/7813
|
||||||
|
**Note:** as part of this change, new Organizations created in the AWX API will _no longer_ automatically synchronize roles and collections from galaxy.ansible.com by default. More details on this change can be found at: https://github.com/ansible/awx/issues/8341#issuecomment-707310633
|
||||||
|
- AWX now utilizes a version of certifi that auto-discovers certificates in the system certificate store - https://github.com/ansible/awx/pull/8242
|
||||||
|
- Added support for arbitrary custom inventory plugin configuration: https://github.com/ansible/awx/issues/5150
|
||||||
|
- Added an optional setting to disable the auto-creation of organizations and teams on successful SAML login. - https://github.com/ansible/awx/pull/8069
|
||||||
|
- Added a number of optimizations to AWX's callback receiver to improve the speed of stdout processing for simultaneous playbooks runs - https://github.com/ansible/awx/pull/8193 https://github.com/ansible/awx/pull/8191
|
||||||
|
- Added the ability to use `!include` and `!import` constructors when constructing YAML for use with the AWX CLI - https://github.com/ansible/awx/issues/8135
|
||||||
|
- Fixed a bug that prevented certain users from being able to edit approval nodes in Workflows - https://github.com/ansible/awx/pull/8253
|
||||||
|
- Fixed a bug that broke password prompting for credentials in certain cases - https://github.com/ansible/awx/issues/8202
|
||||||
|
- Fixed a bug which can cause PostgreSQL deadlocks when running many parallel playbooks against large shared inventories - https://github.com/ansible/awx/issues/8145
|
||||||
|
- Fixed a bug which can cause delays in AWX's task manager when large numbers of simultaneous jobs are scheduled - https://github.com/ansible/awx/issues/7655
|
||||||
|
- Fixed a bug which can cause certain scheduled jobs - those that run every X minute(s) or hour(s) - to fail to run at the proper time - https://github.com/ansible/awx/issues/8071
|
||||||
|
- Fixed a performance issue for playbooks that store large amounts of data using the `set_stats` module - https://github.com/ansible/awx/issues/8006
|
||||||
|
- Fixed a bug related to AWX's handling of the auth_path argument for the HashiVault KeyValue credential plugin - https://github.com/ansible/awx/pull/7991
|
||||||
|
- Fixed a bug that broke support for Remote Archive SCM Type project syncs on platforms that utilize Python2 - https://github.com/ansible/awx/pull/8057
|
||||||
|
- Updated to the latest version of Django Rest Framework to address CVE-2020-25626
|
||||||
|
- Updated to the latest version of Django to address CVE-2020-24583 and CVE-2020-24584
|
||||||
|
- Updated to the latest verson of channels_redis to address a bug that slowly causes Daphne processes to leak memory over time - https://github.com/django/channels_redis/issues/212
|
||||||
|
|
||||||
|
## 14.1.0 (Aug 25, 2020)

- AWX images can now be built on ARM64 - https://github.com/ansible/awx/pull/7607
- Added the Remote Archive SCM Type to support using immutable artifacts and releases (such as tarballs and zip files) as projects - https://github.com/ansible/awx/issues/7954
- Deprecated official support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
- Added resource import/export support to the official AWX collection - https://github.com/ansible/awx/issues/7329
- Added the ability to import YAML-based resources (instead of just JSON) when using the AWX CLI - https://github.com/ansible/awx/pull/7808
- Users upgrading from older versions of AWX may encounter an issue that causes their postgres container to restart in a loop (https://github.com/ansible/awx/issues/7854) - if you encounter this, bring your containers down and then back up (e.g., `docker-compose down && docker-compose up -d`) after upgrading to 14.1.0.
- Updated the AWX CLI to export labels associated with Workflow Job Templates - https://github.com/ansible/awx/pull/7847
- Updated to the latest python-ldap to address a bug - https://github.com/ansible/awx/issues/7868
- Upgraded git-python to fix a bug that caused workflows to sometimes fail - https://github.com/ansible/awx/issues/6119
- Worked around a bug in the channels_redis library that slowly causes Daphne processes to leak memory over time - https://github.com/django/channels_redis/issues/212
- Fixed a bug in the AWX CLI that prevented Workflow nodes from importing properly - https://github.com/ansible/awx/issues/7793
- Fixed a bug in the awx.awx collection release process that templated the wrong version - https://github.com/ansible/awx/issues/7870
- Fixed a bug that caused errors rendering stdout that contained UTF-16 surrogate pairs - https://github.com/ansible/awx/pull/7918

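The YAML import support noted above means an exported resource file no longer has to be JSON. A minimal sketch of the equivalence, assuming a YAML parser such as PyYAML is available (the resource shape below is illustrative, not the AWX CLI's actual internals):

```python
import io
import json

import yaml  # assumption: PyYAML (or any YAML parser) is installed

# Hypothetical exported resource written by hand in YAML rather than JSON.
doc = io.StringIO("""
organizations:
- name: Default
  description: imported from a YAML file
""")

data = yaml.safe_load(doc)
# The importer consumes the same structure either way; YAML is simply a
# friendlier authoring format for the identical JSON document below.
print(json.dumps(data, indent=2))
```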
## 14.0.0 (Aug 6, 2020)

- As part of our commitment to inclusivity in open source, we recently took some time to audit AWX's source code and user interface and replace certain terminology with more inclusive language. Strictly speaking, this isn't a bug or a feature, but we think it's important and worth calling attention to:
  * https://github.com/ansible/awx/commit/78229f58715fbfbf88177e54031f532543b57acc
  * https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language
- Installing roles and collections via requirements.yml as part of Project Updates now requires at least Ansible 2.9 - https://github.com/ansible/awx/issues/7769
- Deprecated the use of the `PRIMARY_GALAXY_USERNAME` and `PRIMARY_GALAXY_PASSWORD` settings. We recommend using tokens to access Galaxy or Automation Hub.
- Added local caching for downloaded roles and collections so they are not re-downloaded on nodes where they are up to date with the project - https://github.com/ansible/awx/issues/5518
- Added the ability to associate K8S/OpenShift credentials to Job Templates for playbook interaction with the `community.kubernetes` collection - https://github.com/ansible/awx/issues/5735
- Added the ability to include HTML in the Custom Login Info presented on the login page - https://github.com/ansible/awx/issues/7600
- Fixed https://access.redhat.com/security/cve/cve-2020-14327 - Server-side request forgery on credentials
- Fixed https://access.redhat.com/security/cve/cve-2020-14328 - Server-side request forgery on webhooks
- Fixed https://access.redhat.com/security/cve/cve-2020-14329 - Sensitive data exposure on labels
- Fixed https://access.redhat.com/security/cve/cve-2020-14337 - Named URLs allow for testing the presence or absence of objects
- Fixed a number of bugs in the user interface related to an upgrade of jQuery:
  * https://github.com/ansible/awx/issues/7530
  * https://github.com/ansible/awx/issues/7546
  * https://github.com/ansible/awx/issues/7534
  * https://github.com/ansible/awx/issues/7606
- Fixed a bug that caused the `-f yaml` flag of the AWX CLI to not print properly formatted YAML - https://github.com/ansible/awx/issues/7795
- Fixed a bug in the installer that caused errors when `docker_registry_password` was set - https://github.com/ansible/awx/issues/7695
- Fixed a permissions error that prevented certain users from starting AWX services - https://github.com/ansible/awx/issues/7545
- Fixed a bug that allowed superusers to run unsafe Jinja code when defining custom Credential Types - https://github.com/ansible/awx/pull/7584/
- Fixed a bug that prevented users from creating (or editing) custom Credential Types containing boolean fields - https://github.com/ansible/awx/issues/7483
- Fixed a bug that prevented users with postgres usernames containing uppercase letters from restoring backups successfully - https://github.com/ansible/awx/pull/7519
- Fixed a bug which allowed the creation (in the Tower API) of Groups and Hosts with the same name - https://github.com/ansible/awx/issues/4680

## 13.0.0 (Jun 23, 2020)

- Added import and export commands to the official AWX CLI, replacing send and receive from the old tower-cli (https://github.com/ansible/awx/pull/6125).
- Removed scripts as a means of running inventory updates of built-in types (https://github.com/ansible/awx/pull/6911)
- Ansible 2.8 is now partially unsupported; some inventory source types are known to no longer work.
- Fixed an issue where the vmware inventory source ssl_verify source variable was not recognized (https://github.com/ansible/awx/pull/7360)
- Fixed a bug that caused redis' listen socket to have too-permissive file permissions (https://github.com/ansible/awx/pull/7317)
- Fixed a bug that caused rsyslogd's configuration file to have world-readable file permissions, potentially leaking secrets (CVE-2020-10782)

## 12.0.0 (Jun 9, 2020)

- Removed memcached as a dependency of AWX (https://github.com/ansible/awx/pull/7240)
- Moved to a single container image build instead of separate awx_web and awx_task images. The container image is just `awx` (https://github.com/ansible/awx/pull/7228)
- Official AWX container image builds now use a two-stage container build process that notably reduces the size of our published images (https://github.com/ansible/awx/pull/7017)
- Removed support for HipChat notifications ([EoL announcement](https://www.atlassian.com/partnerships/slack/faq#faq-98b17ca3-247f-423b-9a78-70a91681eff0)); all previously-created HipChat notification templates will be deleted due to this removal.
- Fixed a bug which broke AWX installations with oc version 4.3 (https://github.com/ansible/awx/pull/6948/)
- Fixed a performance issue that caused notable delay of stdout processing for playbooks run against large numbers of hosts (https://github.com/ansible/awx/issues/6991)
- Fixed a bug that caused CyberArk AIM credential plugin lookups to hang forever in some environments (https://github.com/ansible/awx/issues/6986)
- Fixed a bug that caused ANY/ALL convergence settings not to properly save when editing approval nodes in the UI (https://github.com/ansible/awx/issues/6998)
- Fixed a bug that broke support for the satellite6_group_prefix source variable (https://github.com/ansible/awx/issues/7031)
- Fixed a bug that prevented changes to workflow node convergence settings when approval nodes were in use (https://github.com/ansible/awx/issues/7063)
- Fixed a bug that caused notifications to fail on newer versions of Mattermost (https://github.com/ansible/awx/issues/7264)
- Fixed a bug (by upgrading to 0.8.1 of the foreman collection) that prevented host_filters from working properly with Foreman-based inventory (https://github.com/ansible/awx/issues/7225)
- Fixed a bug that prevented the usage of the Conjur credential plugin with secrets that contain spaces (https://github.com/ansible/awx/issues/7191)
- Fixed a bug in awx-manage run_wsbroadcast --status in kubernetes (https://github.com/ansible/awx/pull/7009)
- Fixed a bug that broke notification toggles for system jobs in the UI (https://github.com/ansible/awx/pull/7042)
- Fixed a bug that broke local pip installs of awxkit (https://github.com/ansible/awx/issues/7107)
- Fixed a bug that prevented PagerDuty notifications from sending for workflow job template approvals (https://github.com/ansible/awx/issues/7094)
- Fixed a bug that broke external log aggregation support for URL paths that include the = character (such as the tokens for SumoLogic) (https://github.com/ansible/awx/issues/7139)
- Fixed a bug that prevented organization admins from removing labels from workflow job templates (https://github.com/ansible/awx/pull/7143)

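For context on the `=` log-aggregation fix above: `=` is a legal character in a URL path segment (RFC 3986), so a logging client must not percent-encode it away when building the endpoint URL. A small standard-library illustration, using a made-up SumoLogic-style token:

```python
from urllib.parse import quote, urlsplit

# Made-up collector URL whose path ends in a token containing '=' padding.
url = "https://collectors.example.com/receiver/v1/http/dG9rZW4="

path = urlsplit(url).path
print(path)
# Re-quoting the path for transmission must treat '=' as safe,
# or the token is mangled into %3D and the endpoint rejects it:
print(quote(path, safe="/="))  # keeps '=' intact
print(quote(path, safe="/"))   # default-style quoting emits %3D instead
```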
## 11.2.0 (Apr 29, 2020)

- Inventory updates now use collection-based plugins by default (in Ansible 2.9+):
  - amazon.aws.aws_ec2
  - community.vmware.vmware_vm_inventory
  - azure.azcollection.azure_rm
  - google.cloud.gcp_compute
  - theforeman.foreman.foreman
  - openstack.cloud.openstack
  - ovirt.ovirt_collection.ovirt
  - awx.awx.tower
- Added support for Approle and LDAP/AD mechanisms to the Hashicorp Vault credential plugin (https://github.com/ansible/awx/issues/5076)
- Added Project (Domain Name) support for the OpenStack Keystone v3 API (https://github.com/ansible/awx/issues/6831)
- Added a new setting for raising log verbosity for rsyslogd (https://github.com/ansible/awx/pull/6818)
- Added the ability to monitor stdout in the CLI for running jobs and workflow jobs (https://github.com/ansible/awx/issues/6165)
- Fixed a bug which prevented the AWX CLI from properly installing with newer versions of pip (https://github.com/ansible/awx/issues/6870)
- Fixed a bug which broke AWX's external logging support when configured with HTTPS endpoints that utilize self-signed certificates (https://github.com/ansible/awx/issues/6851)
- Fixed a local docker installer bug that mistakenly attempted to upgrade PostgreSQL when an external pg_hostname is specified (https://github.com/ansible/awx/pull/5398)
- Fixed a race condition that caused task container crashes when pods are quickly brought down and back up (https://github.com/ansible/awx/issues/6750)
- Fixed a bug that caused 404 errors when attempting to view the second page of the workflow approvals view (https://github.com/ansible/awx/issues/6803)
- Fixed a bug that prevented the use of ANSIBLE_SSH_ARGS for ad-hoc-commands (https://github.com/ansible/awx/pull/6811)
- Fixed a bug that broke AWX installs/upgrades on Red Hat OpenShift (https://github.com/ansible/awx/issues/6791)

## 11.1.0 (Apr 22, 2020)

- Changed rsyslogd to persist queued events to disk (to prevent a risk of out-of-memory errors) (https://github.com/ansible/awx/issues/6746)
- Added the ability to configure the destination and maximum disk size of rsyslogd spool (in the event of a log aggregator outage) (https://github.com/ansible/awx/pull/6763)
- Added the ability to discover playbooks in project clones from symlinked directories (https://github.com/ansible/awx/pull/6773)
- Fixed a bug that caused certain log aggregator settings to break logging integration (https://github.com/ansible/awx/issues/6760)
- Fixed a bug that caused playbook execution in container groups to sometimes unexpectedly deadlock (https://github.com/ansible/awx/issues/6692)
- Improved stability of the new redis clustering implementation (https://github.com/ansible/awx/pull/6739 https://github.com/ansible/awx/pull/6720)
- Improved stability of the new rsyslogd-based logging implementation (https://github.com/ansible/awx/pull/6796)

## 11.0.0 (Apr 16, 2020)

- As of AWX 11.0.0, Kubernetes-based deployments use a Deployment rather than a StatefulSet.
- Reimplemented external logging support using rsyslogd to improve reliability and address a number of issues (https://github.com/ansible/awx/issues/5155)
- Changed activity stream logs to include summary fields for related objects (https://github.com/ansible/awx/issues/1761)
- Added code to more gracefully attempt to reconnect to redis if it restarts/becomes unavailable (https://github.com/ansible/awx/pull/6670)
- Fixed a bug that caused REFRESH_TOKEN_EXPIRE_SECONDS to not properly be respected for OAuth2.0 refresh tokens generated by AWX (https://github.com/ansible/awx/issues/6630)
- Fixed a bug that broke schedules containing RRULES with very old DTSTART dates (https://github.com/ansible/awx/pull/6550)
- Fixed a bug that broke installs on older versions of Ansible packaged with certain Linux distributions (https://github.com/ansible/awx/issues/5501)
- Fixed a bug that caused the activity stream to sometimes report the incorrect actor when associating user membership on SAML login (https://github.com/ansible/awx/pull/6525)
- Fixed a bug in AWX's Grafana notification support when annotation tags are omitted (https://github.com/ansible/awx/issues/6580)
- Fixed a bug that prevented some users from searching for Source Control credentials in the AWX user interface (https://github.com/ansible/awx/issues/6600)
- Fixed a bug that prevented disassociating orphaned users from credentials (https://github.com/ansible/awx/pull/6554)
- Updated Twisted to address CVE-2020-10108 and CVE-2020-10109.

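For context on the RRULE fix above: AWX schedules are stored as iCalendar RRULE strings, and a very old `DTSTART` is expensive for any implementation that walks occurrences one at a time from the start date. A stdlib-only sketch of the scale involved, with illustrative dates (this is not AWX's actual scheduler code):

```python
from datetime import datetime, timedelta, timezone

# A daily rule anchored at the epoch: a naive expansion visits every
# occurrence between DTSTART and "now" before reaching the useful window.
dtstart = datetime(1970, 1, 1, tzinfo=timezone.utc)
now = datetime(2020, 4, 16, tzinfo=timezone.utc)

naive_steps = (now - dtstart).days  # occurrences a step-by-step loop touches
fast_forward = dtstart + timedelta(days=naive_steps)  # arithmetic jump instead

print(f"{naive_steps} occurrences to skip; resume at {fast_forward.date()}")
```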
## 10.0.0 (Mar 30, 2020)

- As of AWX 10.0.0, the official AWX CLI no longer supports Python 2 (it requires at least Python 3.6) (https://github.com/ansible/awx/pull/6327)
- AWX no longer relies on RabbitMQ; Redis is added as a new dependency (https://github.com/ansible/awx/issues/5443)
- Altered AWX's event tables to allow more than ~2 billion total events (https://github.com/ansible/awx/issues/6010)
- Improved the performance (time to execute, and memory consumption) of the periodic job cleanup system job (https://github.com/ansible/awx/pull/6166)
- Updated Job Templates so they now have an explicit Organization field (it is no longer inferred from the associated Project) (https://github.com/ansible/awx/issues/3903)
- Updated social-auth-core to address an upcoming GitHub API deprecation (https://github.com/ansible/awx/issues/5970)
- Updated to ansible-runner 1.4.6 to address various bugs.
- Updated Django to address CVE-2020-9402
- Updated pyyaml version to address CVE-2017-18342
- Fixed a bug which prevented the new `scm_branch` field from being used in custom notification templates (https://github.com/ansible/awx/issues/6258)
- Fixed a race condition that sometimes causes success/failure notifications to include an incomplete list of hosts (https://github.com/ansible/awx/pull/6290)
- Fixed a bug that can cause certain setting pages to lose unsaved form edits when a playbook is launched (https://github.com/ansible/awx/issues/5265)
- Fixed a bug that can prevent the "Use TLS/SSL" field from properly saving when editing email notification templates (https://github.com/ansible/awx/issues/6383)
- Fixed a race condition that sometimes broke event/stdout processing for jobs launched in container groups (https://github.com/ansible/awx/issues/6280)

## 9.3.0 (Mar 12, 2020)

- Added the ability to specify an OAuth2 token description in the AWX CLI (https://github.com/ansible/awx/issues/6122)
- Added support for K8S service account annotations to the installer (https://github.com/ansible/awx/pull/6007)
- Added support for K8S imagePullSecrets to the installer (https://github.com/ansible/awx/pull/5989)
- Launching jobs (and workflows) using the --monitor flag in the AWX CLI now returns a non-zero exit code on job failure (https://github.com/ansible/awx/issues/5920)
- Improved UI performance for various job views when many simultaneous users are logged into AWX (https://github.com/ansible/awx/issues/5883)
- Updated to the latest version of Django to address a few open CVEs (https://github.com/ansible/awx/pull/6080)
- Fixed a critical bug which can cause AWX to hang and stop launching playbooks after a period of time (https://github.com/ansible/awx/issues/5617)
- Fixed a bug which caused delays in project update stdout for certain large SCM clones (as of Ansible 2.9+) (https://github.com/ansible/awx/pull/6254)
- Fixed a bug which caused certain smart inventory filters to mistakenly return duplicate hosts (https://github.com/ansible/awx/pull/5972)
- Fixed an unclear server error when creating smart inventories with the AWX collection (https://github.com/ansible/awx/issues/6250)
- Fixed a bug that broke Grafana notification support (https://github.com/ansible/awx/issues/6137)
- Fixed a UI bug which prevented users with read access to an organization from editing credentials for that organization (https://github.com/ansible/awx/pull/6241)
- Fixed a bug which prevented workflow approval records from recording a `started` and `elapsed` date (https://github.com/ansible/awx/issues/6202)
- Fixed a bug which caused workflow nodes to have a confusing option for `verbosity` (https://github.com/ansible/awx/issues/6196)
- Fixed an RBAC bug which prevented projects and inventory schedules from being created by certain users in certain contexts (https://github.com/ansible/awx/issues/5717)
- Fixed a bug that caused `role_path` in a project's config to not be respected due to an error processing `/etc/ansible/ansible.cfg` (https://github.com/ansible/awx/pull/6038)
- Fixed a bug that broke inventory updates for installs with custom home directories for the awx user (https://github.com/ansible/awx/pull/6152)
- Fixed a bug that broke fact data collection when AWX encounters invalid/unexpected fact data (https://github.com/ansible/awx/issues/5935)

## 9.2.0 (Feb 12, 2020)

- Added the ability to configure the convergence behavior of workflow nodes https://github.com/ansible/awx/issues/3054
- AWX now allows for a configurable global limit for fork count (per-job run). The default maximum is 200. https://github.com/ansible/awx/pull/5604
- Added the ability to specify AZURE_PUBLIC_CLOUD (for e.g., Azure Government KeyVault support) for the Azure credential plugin https://github.com/ansible/awx/issues/5138
- Added support for several additional parameters for Satellite dynamic inventory https://github.com/ansible/awx/pull/5598
- Added a new field to jobs for tracking the date/time a job is cancelled https://github.com/ansible/awx/pull/5610
- Made a series of additional optimizations to the callback receiver to further improve stdout write speed for running playbooks https://github.com/ansible/awx/pull/5677 https://github.com/ansible/awx/pull/5739
- Updated AWX to be compatible with Helm 3.x (https://github.com/ansible/awx/pull/5776)
- Optimized AWX's job dependency/scheduling code to drastically improve processing time in scenarios where there are many pending jobs scheduled simultaneously https://github.com/ansible/awx/issues/5154
- Fixed a bug which could cause SCM authentication details (basic auth passwords) to be reported to external loggers in certain failure scenarios (e.g., when a git clone fails and ansible itself prints an error message to stdout) https://github.com/ansible/awx/pull/5812
- Fixed a k8s installer bug that caused installs to fail in certain situations https://github.com/ansible/awx/issues/5574
- Fixed a number of issues that caused analytics gathering and reporting to run more often than necessary https://github.com/ansible/awx/pull/5721
- Fixed a bug in the AWX CLI that prevented JSON-type settings from saving properly https://github.com/ansible/awx/issues/5528
- Improved support for fetching custom virtualenv dependencies when AWX is installed behind a proxy https://github.com/ansible/awx/pull/5805
- Updated the bundled version of openstacksdk to address a known issue https://github.com/ansible/awx/issues/5821
- Updated the bundled vmware_inventory plugin to the latest version to address a bug https://github.com/ansible/awx/pull/5668
- Fixed a bug that can cause inventory updates to fail to properly save their output when run within a workflow https://github.com/ansible/awx/pull/5666
- Removed a number of pre-computed fields from the Host and Group models to improve AWX performance. As part of this change, inventory group UIs throughout the interface no longer display status icons https://github.com/ansible/awx/pull/5448

## 9.1.1 (Jan 14, 2020)

- Fixed a bug that caused database migrations on Kubernetes installs to hang https://github.com/ansible/awx/pull/5579
- Upgraded Python-level app dependencies in AWX virtual environment https://github.com/ansible/awx/pull/5407
- Running jobs no longer block associated inventory updates https://github.com/ansible/awx/pull/5519
- Fixed invalid_response SAML error https://github.com/ansible/awx/pull/5577
- Optimized the callback receiver to drastically improve the write speed of stdout for parallel jobs (https://github.com/ansible/awx/pull/5618)

## 9.1.0 (Dec 17, 2019)

- Added a command to generate a new SECRET_KEY and rekey the secrets in the database
- Removed project update locking when jobs using it are running
- Fixed slow queries for /api/v2/instances and /api/v2/instance_groups when smart inventories are used
- Fixed a partial password disclosure when special characters existed in the RabbitMQ password (CVE-2019-19342)
- Fixed hang in error handling for source control checkouts
- Fixed an error on subsequent job runs that override the branch of a project on an instance that did not have a prior project checkout
- Fixed an issue where jobs launched in isolated or container groups would incorrectly timeout
- Fixed an incorrect link to instance groups documentation in the user interface
- Fixed editing of inventory on Workflow templates
- Fixed multiple issues with OAuth2 token cleanup system jobs
- Fixed a bug that broke email notifications for workflow approval/deny https://github.com/ansible/awx/issues/5401
- Updated SAML implementation to automatically login if authorization already exists
- Updated AngularJS to 1.7.9 for CVE-2019-10768

## 9.0.1 (Nov 4, 2019)

- Fixed a bug in the installer that broke certain types of k8s installs https://github.com/ansible/awx/issues/5205

## 9.0.0 (Oct 31, 2019)

- Updated AWX images to use centos:8 as the parent image.
- Updated to ansible-runner 1.4.4 to address various bugs.
- Added oc and kubectl to the AWX images to support new container-based execution introduced in 8.0.0.
- Added some optimizations to speed up the deletion of large Inventory Groups.
- Fixed a bug that broke webhook launches for Job Templates that define a survey (https://github.com/ansible/awx/issues/5062).
- Fixed a bug in the CLI which incorrectly parsed launch time arguments for `awx job_templates launch` and `awx workflow_job_templates launch` (https://github.com/ansible/awx/issues/5093).
- Fixed a bug that caused inventory updates using "sourced from a project" to stop working (https://github.com/ansible/awx/issues/4750).
- Fixed a bug that caused Slack notifications to sometimes show the wrong bot avatar (https://github.com/ansible/awx/pull/5125).
- Fixed a bug that prevented the use of digits in AWX's URL settings (https://github.com/ansible/awx/issues/5081).

## 8.0.0 (Oct 21, 2019)

- The Ansible Tower Ansible modules have been migrated to a new official Ansible AWX collection: https://galaxy.ansible.com/awx/AWX

  Please note that this functionality is only supported in Ansible 2.9+

- AWX now supports the ability to launch jobs from external webhooks (GitHub and GitLab integration are supported).
- AWX now supports Container Groups, a new feature that allows you to schedule and run playbooks on single-use kubernetes pods on-demand.
- AWX now supports sending notifications when Workflow steps are approved, denied, or time out.
- AWX now records the user who approved or denied Workflow steps.
- AWX now supports fetching Ansible Collections from private galaxy servers.
- AWX now checks the user's ansible.cfg for paths where role/collections may live when running project updates.
- AWX now uses PostgreSQL 10 by default.
- AWX now warns more loudly about underlying AMQP connectivity issues (https://github.com/ansible/awx/pull/4857).
- Added a few optimizations to drastically improve dashboard performance for larger AWX installs (installs with several hundred thousand jobs or more).
- Updated to the latest version of Ansible's VMWare inventory script (which adds support for vmware_guest_facts).
- Deprecated /api/v2/inventory_scripts/ (this endpoint - and the Custom Inventory Script feature - will be removed in a future release of AWX).
- Fixed a bug which prevented Organization Admins from removing users from their own Organization (https://github.com/ansible/awx/issues/2979)
- Fixed a bug which sometimes caused cluster nodes to fail to re-join with a cryptic error, "No instance found with the current cluster host id" (https://github.com/ansible/awx/issues/4294)
- Fixed a bug that prevented the use of launch-time passphrases when using credential plugins (https://github.com/ansible/awx/pull/4807)
- Fixed a bug that caused notifications assigned at the Organization level not to take effect for Workflows in that Organization (https://github.com/ansible/awx/issues/4712)
- Fixed a bug which caused a notable amount of CPU overhead on RabbitMQ health checks (https://github.com/ansible/awx/pull/5009)
- Fixed a bug which sometimes caused the `<return>` key to stop functioning in `<textarea>` elements (https://github.com/ansible/awx/issues/4192)
- Fixed a bug which caused request contention when the same OAuth2.0 token was used in multiple simultaneous requests (https://github.com/ansible/awx/issues/4694)
- Fixed a bug related to parsing multiple choice survey options (https://github.com/ansible/awx/issues/4452).
- Fixed a bug that caused single-sign-on icons on the login page to fail to render in certain Windows browsers (https://github.com/ansible/awx/issues/3924)
- Fixed a number of bugs that caused certain OAuth2 settings to not be properly respected, such as REFRESH_TOKEN_EXPIRE_SECONDS.
- Fixed a number of bugs in the AWX CLI, including a bug which sometimes caused long lines of stdout output to be unexpectedly truncated.
- Fixed a number of bugs on the job details UI which sometimes caused auto-scrolling stdout to become stuck.
- Fixed a bug which caused LDAP authentication to fail if the TLD of the server URL contained digits (https://github.com/ansible/awx/issues/3646)
- Fixed a bug which broke HashiCorp Vault integration on older versions of HashiCorp Vault.

## 7.0.0 (Sept 4, 2019)

- AWX now detects and installs Ansible Collections defined in your project (note - this feature only works in Ansible 2.9+) (https://github.com/ansible/awx/issues/2534)
- AWX now includes an official command line client. Keep an eye out for a follow-up email on this mailing list for information on how to install it and try it out.
- Added the ability to provide a specific SCM branch on jobs (https://github.com/ansible/awx/issues/282)
- Added support for Workflow Approval Nodes, a new feature which allows you to add "pause and wait for approval" steps into your workflows (https://github.com/ansible/awx/issues/1206)
- Added the ability to specify a specific HTTP method for webhook notifications (POST vs PUT) (https://github.com/ansible/awx/pull/4124)
- Added the ability to specify a username and password for HTTP Basic Authorization for webhook notifications (https://github.com/ansible/awx/pull/4124)
- Added support for customizing the text content of notifications (https://github.com/ansible/awx/issues/79)
- Added the ability to enable and disable hosts in dynamic inventory (https://github.com/ansible/awx/pull/4420)
- Added the description (if any) to the Job Template list (https://github.com/ansible/awx/issues/4359)
- Added new metrics for instance hostnames and pending jobs to the /api/v2/metrics/ endpoint (https://github.com/ansible/awx/pull/4375)
- Changed AWX's on/off toggle buttons to a non-text based style to simplify internationalization (https://github.com/ansible/awx/pull/4425)
- Events emitted by ansible for adhoc commands are now sent to the external log aggregator (https://github.com/ansible/awx/issues/4545)
- Fixed a bug which allowed a user to make an organization credential in another organization without permissions to that organization (https://github.com/ansible/awx/pull/4483)
- Fixed a bug that caused `extra_vars` on workflows to break when edited (https://github.com/ansible/awx/issues/4293)
- Fixed a slow SQL query that caused performance issues when large numbers of groups exist (https://github.com/ansible/awx/issues/4461)
- Fixed a few minor bugs in survey field validation (https://github.com/ansible/awx/pull/4509) (https://github.com/ansible/awx/pull/4479)
- Fixed a bug that sometimes resulted in orphaned `ansible_runner_pi` directories in `/tmp` after playbook execution (https://github.com/ansible/awx/pull/4409)
- Fixed a bug that caused the `is_system_auditor` flag in LDAP configuration to not work (https://github.com/ansible/awx/pull/4396)
- Fixed a bug which caused schedules to disappear from the UI when toggled off (https://github.com/ansible/awx/pull/4378)
- Fixed a bug that sometimes caused stdout content to contain extraneous blank lines in newer versions of Ansible (https://github.com/ansible/awx/pull/4391)
- Updated to the latest Django security release, 2.2.4 (https://github.com/ansible/awx/pull/4410) (https://www.djangoproject.com/weblog/2019/aug/01/security-releases/)
- Updated the default version of git to a version that includes support for x509 certificates (https://github.com/ansible/awx/issues/4362)
- Removed the deprecated `credential` field from `/api/v2/workflow_job_templates/N/` (as part of the `/api/v1/` removal in prior AWX versions - https://github.com/ansible/awx/pull/4490).

## 6.1.0 (Jul 18, 2019)

- Updated AWX to use Django 2.2.2.
- Updated the provided openstacksdk version to support new functionality (such as Nova scheduler_hints)
- Added the ability to specify a custom cacert for the HashiCorp Vault credential plugin
- Fixed a number of bugs related to path lookups for the HashiCorp Vault credential plugin
- Fixed a bug which prevented signed SSH certificates from working, including the HashiCorp Vault Signed SSH backend
- Fixed a bug which prevented custom logos from displaying on the login page (as a result of a new Content Security Policy in 6.0.0)
- Fixed a bug which broke websocket connectivity in Apple Safari (as a result of a new Content Security Policy in 6.0.0)
- Fixed a bug on the job output page that occasionally caused the "up" and "down" buttons to not load additional output
- Fixed a bug on the job output page that caused quoted task names to display incorrectly

## 6.0.0 (Jul 1, 2019)
|
||||||
|
|
||||||
|
- Removed support for "Any" notification templates and their API endpoints e.g., /api/v2/job_templates/N/notification_templates/any/ (https://github.com/ansible/awx/issues/4022)
|
||||||
|
- Fixed a bug which prevented credentials from properly being applied to inventory sources (https://github.com/ansible/awx/issues/4059)
|
||||||
|
- Fixed a bug which can cause the task dispatcher to hang indefinitely when external logging support (e.g., Splunk, Logstash) is enabled (https://github.com/ansible/awx/issues/4181)
|
||||||
|
- Fixed a bug which causes slow stdout display when running jobs against smart inventories. (https://github.com/ansible/awx/issues/3106)
|
||||||
|
- Fixed a bug that caused SSL verification flags to fail to be respected for LDAP authentication in certain environments. (https://github.com/ansible/awx/pull/4190)
|
||||||
|
- Added a simple Content Security Policy (https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) to restrict access to third-party resources in the browser. (https://github.com/ansible/awx/pull/4167)
|
||||||
|
- Updated ovirt4 library dependencies to work with newer versions of oVirt (https://github.com/ansible/awx/issues/4138)
|
||||||
|
|
||||||
|
## 5.0.0 (Jun 21, 2019)
|
||||||
|
|
||||||
|
- Bump Django Rest Framework from 3.7.7 to 3.9.4
|
||||||
|
- Bump setuptools / pip dependencies
|
||||||
|
- Fixed bug where Recent Notification list would not appear
|
||||||
|
- Added notifications on job start
|
||||||
|
- Default to Ansible 2.8
|
||||||
|
|
@ -0,0 +1,357 @@

# AWX

Hi there! We're excited to have you as a contributor.

Have questions about this document or anything not covered here? Come chat with us at `#ansible-awx` on irc.freenode.net, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).

## Table of contents

* [Things to know prior to submitting code](#things-to-know-prior-to-submitting-code)
* [Setting up your development environment](#setting-up-your-development-environment)
  * [Prerequisites](#prerequisites)
    * [Docker](#docker)
    * [Docker compose](#docker-compose)
    * [Frontend Development](#frontend-development)
  * [Build the environment](#build-the-environment)
    * [Fork and clone the AWX repo](#fork-and-clone-the-awx-repo)
    * [Create local settings](#create-local-settings)
    * [Build the base image](#build-the-base-image)
    * [Build the user interface](#build-the-user-interface)
  * [Running the environment](#running-the-environment)
    * [Start the containers](#start-the-containers)
    * [Start from the container shell](#start-from-the-container-shell)
  * [Post Build Steps](#post-build-steps)
    * [Start a shell](#start-a-shell)
    * [Create an admin user](#create-an-admin-user)
    * [Load demo data](#load-demo-data)
    * [Building API Documentation](#building-api-documentation)
  * [Accessing the AWX web interface](#accessing-the-awx-web-interface)
  * [Purging containers and images](#purging-containers-and-images)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Reporting Issues](#reporting-issues)

## Things to know prior to submitting code

- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care that no merge commits are in the submission; use `git rebase` rather than `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also saves time and effort if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
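
The sign-off requirement above is just a commit-message trailer, which you can see for yourself in a throwaway repo. A minimal sketch (the repo, file, and identity below are illustrative, not part of AWX):

```shell
# Create a disposable repo and make one signed-off commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Example Contributor"
git config user.email "contributor@example.com"
echo demo > file.txt
git add file.txt
git commit -q --signoff -m "Add demo file"
# --signoff appends a trailer of the form:
#   Signed-off-by: Example Contributor <contributor@example.com>
git log -1 --format=%B
```

If you forgot the flag on your most recent commit, `git commit --amend --signoff` adds the same trailer to `HEAD`.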

## Setting up your development environment

The AWX development environment workflow and toolchain is based on Docker and the docker-compose tool, which provide the dependencies, services, and databases necessary to run all of the components. It also binds the local source tree into the development container, making it possible to observe and test changes in real time.

### Prerequisites

#### Docker

Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker itself maintains a separate repo that tracks more closely to the latest releases.

For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows) respectively.

For Linux platforms, refer to the following from Docker:

**Fedora**

> https://docs.docker.com/engine/installation/linux/docker-ce/fedora/

**CentOS**

> https://docs.docker.com/engine/installation/linux/docker-ce/centos/

**Ubuntu**

> https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/

**Debian**

> https://docs.docker.com/engine/installation/linux/docker-ce/debian/

**Arch**

> https://wiki.archlinux.org/index.php/Docker

#### Docker compose

If you're not using Docker for Mac or Docker for Windows, you may need (or choose) to install the docker-compose Python module separately, in which case you'll need to run the following:

```bash
(host)$ pip3 install docker-compose
```

#### Frontend Development

See [the UI development documentation](awx/ui_next/CONTRIBUTING.md).

### Build the environment

#### Fork and clone the AWX repo

If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).

#### Create local settings

AWX will import the file `awx/settings/local_settings.py` and combine it with the defaults in `awx/settings/defaults.py`. This file is required for starting the development environment, and startup will fail if it's not provided.

An example is provided. Make a copy of it, and edit as needed (the defaults are usually fine):

```bash
(host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
```
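
`local_settings.py` is an ordinary Django settings module, so overrides are plain Python assignments. A hypothetical example (the setting names below are standard Django settings, not AWX-specific guarantees):

```python
# awx/settings/local_settings.py -- hypothetical overrides.
# Anything assigned here wins over the values in awx/settings/defaults.py.

DEBUG = True           # standard Django debug flag
ALLOWED_HOSTS = ["*"]  # standard Django host allowlist; fine for local dev only
```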

#### Build the base image

The AWX base container image (defined in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and symbolic links into the development environment that make running the services easy.

Run the following to build the image:

```bash
(host)$ make docker-compose-build
```

**NOTE**

> The image will need to be rebuilt if the Python requirements or OS dependencies change.

Once the build completes, you will have an `ansible/awx_devel` image in your local image cache. Use the `docker images` command to view it, as follows:

```bash
(host)$ docker images

REPOSITORY          TAG      IMAGE ID       CREATED          SIZE
ansible/awx_devel   latest   ba9ec3e8df74   26 minutes ago   1.42GB
```

#### Build the user interface

Run the following to build the AWX UI:

```bash
(host)$ make ui-devel
```

See [the UI development documentation](awx/ui/README.md) for more information on using the frontend development, build, and test tooling.

### Running the environment

#### Start the containers

Start the development containers by running the following:

```bash
(host)$ make docker-compose
```

The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the front-end build process.

If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:

```bash
# List running containers
(host)$ docker ps

CONTAINER ID   IMAGE                                              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                             NAMES
44251b476f98   gcr.io/ansible-tower-engineering/awx_devel:devel   "/entrypoint.sh /bin…"   27 seconds ago   Up 23 seconds   0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp   tools_awx_run_9e820694d57e
40de380e3c2e   redis:latest                                       "docker-entrypoint.s…"   28 seconds ago   Up 26 seconds
b66a506d3007   postgres:12                                        "docker-entrypoint.s…"   28 seconds ago   Up 26 seconds   0.0.0.0:5432->5432/tcp                                                                                                                                            tools_postgres_1
```

**NOTE**

> The Makefile assumes that the image you built is tagged with your current branch. This allows you to build images for different contexts or branches. When starting the containers, you can choose a specific branch by setting `COMPOSE_TAG=<branch name>` in your environment.
>
> For example, you might be working in a feature branch, but you want to run the containers using the `devel` image you built previously. To do that, start the containers using the following command: `$ COMPOSE_TAG=devel make docker-compose`

##### Wait for migrations to complete

The first time you start the environment, database migrations need to run in order to build the PostgreSQL database. This will take a few moments, but eventually you will see output in your terminal session that looks like the following:

```bash
awx_1        | Operations to perform:
awx_1        |   Synchronize unmigrated apps: solo, api, staticfiles, debug_toolbar, messages, channels, django_extensions, ui, rest_framework, polymorphic
awx_1        |   Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1        | Synchronizing apps without migrations:
awx_1        |   Creating tables...
awx_1        |   Running deferred SQL...
awx_1        |   Installing custom SQL...
awx_1        | Running migrations:
awx_1        |   Rendering model states... DONE
awx_1        |   Applying contenttypes.0001_initial... OK
awx_1        |   Applying contenttypes.0002_remove_content_type_name... OK
awx_1        |   Applying auth.0001_initial... OK
awx_1        |   Applying auth.0002_alter_permission_name_max_length... OK
awx_1        |   Applying auth.0003_alter_user_email_max_length... OK
awx_1        |   Applying auth.0004_alter_user_username_opts... OK
awx_1        |   Applying auth.0005_alter_user_last_login_null... OK
awx_1        |   Applying auth.0006_require_contenttypes_0002... OK
awx_1        |   Applying taggit.0001_initial... OK
awx_1        |   Applying taggit.0002_auto_20150616_2121... OK
awx_1        |   Applying main.0001_initial... OK
awx_1        |   Applying main.0002_squashed_v300_release... OK
awx_1        |   Applying main.0003_squashed_v300_v303_updates... OK
awx_1        |   Applying main.0004_squashed_v310_release... OK
awx_1        |   Applying conf.0001_initial... OK
awx_1        |   Applying conf.0002_v310_copy_tower_settings... OK
...
```

Once migrations are completed, you can begin using AWX.

#### Start from the container shell

Often you'll want to start the development environment without immediately starting all of the services in the *awx* container, and instead be taken directly to a shell. You can do this with the following:

```bash
(host)$ make docker-compose-test
```

Using `docker exec`, this will create a session in the running *awx* container, and place you at a command prompt, where you can run shell commands inside the container.

If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:

```bash
(container)# /usr/bin/bootstrap_development.sh
```

The above will do all the setup tasks, including running database migrations, so it may take a couple of minutes. Once it's done, it will drop you back to the shell.

To launch all developer services:

```bash
(container)# /usr/bin/launch_awx.sh
```

`launch_awx.sh` also calls `bootstrap_development.sh`, so if all you are doing is launching the supervisor to start all services, you don't need to call `bootstrap_development.sh` first.

### Post Build Steps

Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.

##### Start a shell

To create the admin user and load demo data, you first need to start a shell session on the *awx* container. In a new terminal session, use the `docker exec` command as follows to start the shell session:

```bash
(host)$ docker exec -it tools_awx_1 bash
```

This creates a session in the *awx* container, just as if you were using `ssh`, and allows you to execute commands within the running container.

##### Create an admin user

Before you can log into AWX, you need to create an admin user. With this user you will be able to create more users and begin configuring the server. From within the container shell, run the following command:

```bash
(container)# awx-manage createsuperuser
```

You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.

##### Load demo data

You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:

```bash
(container)# awx-manage create_preload_data
```

**NOTE**

> This information will persist in the database running in the `tools_postgres_1` container until the container is removed. You may periodically need to recreate this container, and thus the database, if the database schema changes in an upstream commit.

##### Building API Documentation

AWX includes support for building [Swagger/OpenAPI documentation](https://swagger.io). To build the documentation locally, run:

```bash
(container)/awx_devel$ make swagger
```

This will write a file named `swagger.json` that contains the API specification in OpenAPI format. A variety of online tools are available for translating this data into more consumable formats (such as HTML). http://editor.swagger.io is an example of one such service.
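
As an alternative to an online viewer, the generated file can be inspected with a few lines of Python. A minimal sketch (the inline spec fragment is illustrative, not AWX's real schema; in practice you would `json.load` the `swagger.json` produced above):

```python
import json

def list_endpoints(spec):
    """Return sorted 'METHOD path' strings for every operation in the spec."""
    return sorted(
        f"{method.upper()} {path}"
        for path, ops in spec.get("paths", {}).items()
        for method in ops
    )

# Illustrative fragment; real usage: spec = json.load(open("swagger.json"))
spec = {"paths": {"/api/v2/ping/": {"get": {}}, "/api/v2/jobs/": {"get": {}, "post": {}}}}
print(list_endpoints(spec))  # ['GET /api/v2/jobs/', 'GET /api/v2/ping/', 'POST /api/v2/jobs/']
```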

### Accessing the AWX web interface

You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).

To log in, use the admin user and password you created above in [Create an admin user](#create-an-admin-user).

### Purging containers and images

When necessary, remove any AWX containers and images by running the following:

```bash
(host)$ make docker-clean
```

## What should I work on?

For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).

If an enhancement has someone assigned to it, then that person is responsible for working on it. If you feel like you could contribute, reach out to that person.

Fixing bugs, adding translations, and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start. For extra information on debugging tools, see [Debugging](https://github.com/ansible/awx/blob/devel/docs/debugging.md).

**NOTE**

> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).

**NOTE**

> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](./awx/ui/README.md).

## Submitting Pull Requests

Fixes and features for AWX go through the GitHub pull request process. Submit your pull request (PR) against the `devel` branch.

Here are a few things you can do to help the visibility of your change, and increase the likelihood that it will be accepted:

* No issues when running linters/code checkers
  * Python: flake8: `(container)/awx_devel$ make flake8`
  * JavaScript: JSHint: `(container)/awx_devel$ make jshint`
* No issues from unit tests
  * Python: py.test: `(container)/awx_devel$ make test`
  * JavaScript: Jasmine: `(container)/awx_devel$ make ui-test-ci`
* Write tests for new functionality; update or add tests for bug fixes
* Make the smallest change possible
* Write good commit messages. See [How to write a Git commit message](https://chris.beams.io/posts/git-commit/).

It's generally a good idea to discuss features with us first by engaging us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).

We like to keep our commit history clean, and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase` rather than `git pull`, and `git rebase` rather than `git merge`.
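
If you'd like to make that the default for your clone (a convenience, not a project requirement), git can be told to always rebase on pull. A small sketch using a throwaway repo:

```shell
# Throwaway repo purely to illustrate the setting; in practice run
# `git config pull.rebase true` inside your awx clone instead.
tmp=$(mktemp -d)
git init -q "$tmp"
git -C "$tmp" config pull.rebase true
git -C "$tmp" config pull.rebase   # prints: true
```

With `pull.rebase` set, a plain `git pull` rebases your local commits on top of the upstream branch instead of creating a merge commit.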

Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, and so we review requests carefully. Please be patient.

All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.

## PR Checks run by Zuul

Zuul jobs for AWX are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.

Zuul runs the following checks that must pass:

1) `tox-awx-api-lint`
2) `tox-awx-ui-lint`
3) `tox-awx-api`
4) `tox-awx-ui`
5) `tox-awx-swagger`

Zuul runs the following checks that are non-voting (they cannot block a PR, but serve to inform reviewers):

1) `tox-awx-detect-schema-change`

This check generates the schema and diffs it against a reference copy of the `devel` version of the schema. Reviewers should inspect the `job-output.txt.gz` related to the check if there is a failure (grep for `diff -u -b` to find the beginning of the diff). If the schema change is expected and makes sense in relation to the changes made by the PR, then you are good to go! If not, the schema changes should be fixed, but this decision must be enforced by reviewers.

## Reporting Issues

We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).
@ -0,0 +1,9 @@

# Migrating Data Between AWX Installations

## Introduction

Early versions of AWX did not support seamless upgrades between major versions and required the use of a backup and restore tool to perform upgrades.

Users who wish to upgrade modern AWX installations should follow the instructions at:

https://github.com/ansible/awx/blob/devel/INSTALL.md#upgrading-from-previous-versions
@ -0,0 +1,45 @@

DCO
===

All contributors must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of DCO 1.1, which appears below:

```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```
@ -0,0 +1,670 @@

# Installing AWX

This document provides a guide for installing AWX.

## Table of contents

- [Installing AWX](#installing-awx)
  * [Getting started](#getting-started)
    + [Clone the repo](#clone-the-repo)
    + [AWX branding](#awx-branding)
    + [Prerequisites](#prerequisites)
    + [System Requirements](#system-requirements)
    + [Choose a deployment platform](#choose-a-deployment-platform)
    + [Official vs Building Images](#official-vs-building-images)
  * [Upgrading from previous versions](#upgrading-from-previous-versions)
  * [OpenShift](#openshift)
    + [Prerequisites](#prerequisites-1)
    + [Pre-install steps](#pre-install-steps)
      - [Deploying to Minishift](#deploying-to-minishift)
      - [PostgreSQL](#postgresql)
    + [Run the installer](#run-the-installer)
    + [Post-install](#post-install)
    + [Accessing AWX](#accessing-awx)
  * [Kubernetes](#kubernetes)
    + [Prerequisites](#prerequisites-2)
    + [Pre-install steps](#pre-install-steps-1)
    + [Configuring Helm](#configuring-helm)
    + [Run the installer](#run-the-installer-1)
    + [Post-install](#post-install-1)
    + [Accessing AWX](#accessing-awx-1)
    + [SSL Termination](#ssl-termination)
  * [Docker-Compose](#docker-compose)
    + [Prerequisites](#prerequisites-3)
    + [Pre-install steps](#pre-install-steps-2)
      - [Deploying to a remote host](#deploying-to-a-remote-host)
      - [Inventory variables](#inventory-variables)
      - [Docker registry](#docker-registry)
      - [Proxy settings](#proxy-settings)
      - [PostgreSQL](#postgresql-1)
    + [Run the installer](#run-the-installer-2)
    + [Post-install](#post-install-2)
    + [Accessing AWX](#accessing-awx-2)
- [Installing the AWX CLI](#installing-the-awx-cli)
  * [Building the CLI Documentation](#building-the-cli-documentation)

## Getting started

### Clone the repo

If you have not already done so, you will need to clone, or create a local copy of, the [AWX repo](https://github.com/ansible/awx). We generally recommend that you view the releases page:

https://github.com/ansible/awx/releases

...and clone the latest stable release, e.g.:

`git clone -b x.y.z https://github.com/ansible/awx.git`

Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (also, see [Official vs Building Images](#official-vs-building-images) for building your own image).

For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).

Once you have a local copy, run the commands in the following sections from the root of the project tree.

### AWX branding

You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).

To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
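
For example, with `awx-logos` cloned alongside `awx`, the relevant inventory line looks like this (fragment only; the rest of your inventory stays as shipped):

```ini
# installer/inventory (fragment) -- enable official branding assets
awx_official=true
```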

### Prerequisites

Before you can run a deployment, you'll need the following installed in your local environment:

- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html) version 2.8+
- [Docker](https://docs.docker.com/engine/installation/)
  + A recent version
- [docker](https://pypi.org/project/docker/) Python module
  + This is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
  + We use this module instead of `docker-py` because it is what the `docker-compose` Python module requires.
- [community.general.docker_image collection](https://docs.ansible.com/ansible/latest/collections/community/general/docker_image_module.html)
  + This is only required if you are using Ansible >= 2.10
- [GNU Make](https://www.gnu.org/software/make/)
- [Git](https://git-scm.com/) version 1.8.4+
- Python 3.6+
- [Node 14.x LTS version](https://nodejs.org/en/download/)
  + This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
- [NPM 6.x LTS](https://docs.npmjs.com/)
  + This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`

### System Requirements

The system that runs the AWX service will need to satisfy the following requirements:

- At least 4 GB of memory
- At least 2 CPU cores
- At least 20 GB of disk space
- Running Docker, OpenShift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10+.
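
If you do use an external PostgreSQL 10+ server, its connection details also go in the installer inventory. A hedged fragment (these variable names follow the historical AWX installer's inventory and the values are placeholders; verify them against the `installer/inventory` shipped with your release):

```ini
# installer/inventory (fragment) -- external database, illustrative values
pg_hostname=db.example.com
pg_port=5432
pg_database=awx
pg_username=awx
pg_password=changeme
```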

### Choose a deployment platform

We currently support running AWX as a containerized application using Docker images deployed to an OpenShift cluster, a Kubernetes cluster, or docker-compose. The remainder of this document will walk you through the process of building the images and deploying them to the platform of your choice.

The [installer](./installer) directory contains an [inventory](./installer/inventory) file and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.

In the sections below, you'll find deployment details and instructions for each platform:

- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker Compose](#docker-compose)

### Official vs Building Images

When installing AWX, you have the option of building your own image or using the image provided on DockerHub (see [awx](https://hub.docker.com/r/ansible/awx/)).

This is controlled by the following variables in the `inventory` file:

```
dockerhub_base=ansible
dockerhub_version=latest
```

If these variables are present, then all deployments will use these hosted images. If the variables are not present, then the images will be built during the install.

*dockerhub_base*

> The base location on DockerHub where the images are hosted (by default this pulls a container image named `ansible/awx:tag`)

*dockerhub_version*

> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123

*use_container_for_build*

> Use a local distribution build container image for building the AWX package. This is helpful if you don't want to bother installing the build-time dependencies, as it is taken care of already.
## Upgrading from previous versions

Upgrading AWX involves rerunning the install playbook. Download a newer release from [https://github.com/ansible/awx/releases](https://github.com/ansible/awx/releases) and re-populate the inventory file with your customized variables.

For convenience, you can create a file called `vars.yml`:

```
admin_password: 'adminpass'
pg_password: 'pgpass'
secret_key: 'mysupersecret'
```

And pass it to the installer:

```bash
$ ansible-playbook -i inventory install.yml -e @vars.yml
```

## OpenShift

### Prerequisites

To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single-node cluster running inside a virtual machine.

When using OpenShift to deploy AWX, make sure you have sufficient privileges to add the `privileged` security context, otherwise the installation will fail. The privileged context is needed because [the bubblewrap tool](https://github.com/containers/bubblewrap) is used to add an additional layer of security when using containers.

You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on, the cluster.

The default resource requests per deployment are:

> Memory: 6GB
>
> CPU: 3 cores

This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this, as undersized instances will experience crashes and resource exhaustion.

For more detail on how resource requests are formed, see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)

### Pre-install steps

Before starting the install, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:

*openshift_host*

> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.

*openshift_skip_tls_verify*

> Boolean. Set to True if using self-signed certs.

*openshift_project*

> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.

*openshift_user*

> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.

*openshift_pg_emptydir*

> Boolean. Set to True to use an emptyDir volume when deploying the PostgreSQL pod. Note: this should only be used for demo and testing purposes.

*docker_registry*

> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.

*docker_registry_repository*

> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.

*docker_registry_username*

> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.

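Taken together, a minimal `[all:vars]` section for a Minishift-based deployment might look like the following sketch (the host address is an illustrative placeholder, not a value you must use; the registry settings match the Minishift defaults described above):

```
[all:vars]
openshift_host=192.168.64.2:8443
openshift_project=awx
openshift_user=developer
openshift_skip_tls_verify=true
openshift_pg_emptydir=yes
docker_registry=172.30.1.1:5000
docker_registry_repository=awx
docker_registry_username=developer
```
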
#### Deploying to Minishift

Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).

The recommended minimum resources for your Minishift VM:

```bash
$ minishift start --cpus=4 --memory=8GB
```

The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to set up your environment:

```bash
# Set the DOCKER environment variables to point to the Minishift VM
$ eval $(minishift docker-env)
```

**Note**

> If you choose not to use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.

#### PostgreSQL

By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), which is named `postgresql` by default and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes` instead.

If you wish to use an external database, in the inventory file set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When `pg_hostname` is set, the installer will assume you have configured the database at that location and will not launch the PostgreSQL pod.

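For example, pointing the installer at an existing database would look like this in the inventory file (the hostname and credentials below are placeholders for your own values):

```
pg_hostname=postgresql.example.com
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432
```
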
### Run the installer

To start the install, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.

If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc login`.

Run the following command (*docker_registry_password* is optional if using official images):

```bash
# Start the install
$ ansible-playbook -i inventory install.yml -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
```

### Post-install

After the playbook run completes, check the status of the deployment by running `oc get pods`:

```bash
# View the running pods
$ oc get pods

NAME                   READY     STATUS    RESTARTS   AGE
awx-3886581826-5mv0l   4/4       Running   0          8s
postgresql-1-l85fh     1/1       Running   0          20m
```

In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:

```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```

You will see the following output, indicating that database migrations are running:

```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "changed": false,
    "db": "awx"
}
Operations to perform:
  Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
  Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying taggit.0001_initial... OK
  Applying taggit.0002_auto_20150616_2121... OK
  ...
```

When you see output similar to the following, you'll know that database migrations have completed, and you can access the web interface:

```bash
Python 2.7.5 (default, Nov  6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)

>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx-3886581826-5mv0l
(changed: True)
Creating instance group tower
Added instance awx-3886581826-5mv0l to tower
```

Once database migrations complete, the web interface will be accessible.

### Accessing AWX

The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service. To view the service and its port value, run the following command:

```bash
# View available services
$ oc get services

NAME          CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
awx-web-svc   172.30.111.74   <nodes>       8052:30083/TCP   37m
postgresql    172.30.102.9    <none>        5432/TCP         38m
```

The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment, and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:

```bash
# View available routes
$ oc get routes

NAME          HOST/PORT                             PATH      SERVICES      PORT      TERMINATION   WILDCARD
awx-web-svc   awx-web-svc-awx.192.168.64.2.nip.io             awx-web-svc   http      edge/Allow    None
```

The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).

Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.

## Kubernetes

### Prerequisites

A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [helm](https://helm.sh/docs/intro/quickstart/)

The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.

The default resource requests per pod are:

> Memory: 6GB
>
> CPU: 3 cores

This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this, as undersized instances will experience crashes and resource exhaustion.

For more detail on how resource requests are formed, see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)

### Pre-install steps

Before starting the install process, review the [inventory](./installer/inventory) file, uncommenting and providing values for the following variables found in the `[all:vars]` section. Make sure the OpenShift and standalone Docker sections are commented out:

*kubernetes_context*

> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use.

*kubernetes_namespace*

> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist.

*docker_registry_*

> These settings should be used if building your own base images. You'll need access to an external registry, and you are responsible for making sure your Kubernetes cluster can reach it and use it. If these are undefined, and the *dockerhub_* configuration settings are uncommented, then the images will be pulled from DockerHub instead.

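A Kubernetes-oriented `[all:vars]` section might then contain something like the following sketch (the context name is an illustrative placeholder; you can list your configured contexts with `kubectl config get-contexts`):

```
[all:vars]
kubernetes_context=my-cluster-context
kubernetes_namespace=awx
```
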
### Configuring Helm

If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), you will need a working `helm` installation; you can find details here: [https://helm.sh/docs/intro/quickstart/](https://helm.sh/docs/intro/quickstart/).

You do not need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), as Helm does it for you. However, an existing one may be used by setting the `pg_persistence_existingclaim` variable.

Newer Kubernetes clusters with RBAC enabled will need a service account created for Helm; make sure to follow the instructions here: [https://helm.sh/docs/topics/rbac/](https://helm.sh/docs/topics/rbac/)

### Run the installer

After making changes to the `inventory` file, use `ansible-playbook` to begin the install:

```bash
$ ansible-playbook -i inventory install.yml
```

### Post-install

After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace *awx* with the namespace you used):

```bash
# View the running pods; it may take a few minutes for everything to reach the Running state
$ kubectl get pods --namespace awx
NAME                             READY     STATUS    RESTARTS   AGE
awx-2558692395-2r8ss             4/4       Running   0          29s
awx-postgresql-355348841-kltkn   1/1       Running   0          1m
```

### Accessing AWX

The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:

```bash
# View available services
$ kubectl get svc --namespace awx
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-postgresql   ClusterIP   10.7.250.208   <none>        5432/TCP       2m
awx-web-svc      NodePort    10.7.241.35    <none>        80:30177/TCP   1m
```

The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about the Ingress with:

```bash
$ kubectl get ing --namespace awx
NAME          HOSTS     ADDRESS      PORTS     AGE
awx-web-svc   *         35.227.x.y   80        3m
```

If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to that address and access the AWX interface. For some providers it can take a few minutes to allocate and make this address accessible. For others, manual intervention may be required.

### SSL Termination

Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX through HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).

## Docker-Compose

### Prerequisites

- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the `docker` group; refer to the documentation for details)
- [docker-compose](https://pypi.org/project/docker-compose/) Python module.
  + This also installs the `docker` Python module, which is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
- [Docker Compose](https://docs.docker.com/compose/install/).

### Pre-install steps

#### Deploying to a remote host

By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, then ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file by commenting out `localhost`, and adding the remote host.

For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:

```
# localhost ansible_connection=local
awx-server

[all:vars]
...
```

In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to the remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.

If you choose to use the official images, then the remote host will be the one to pull those images.

**Note**

> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).

> As mentioned above, in [Prerequisites](#prerequisites-1), the prerequisites are required on the remote host.

> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.

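Those behavioral inventory parameters can be supplied inline on the host line. For instance, a remote host entry might look like the following (the address and username are placeholders for your own environment):

```
awx-server ansible_host=203.0.113.10 ansible_user=deploy
```
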
#### Inventory variables

Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:

*postgres_data_dir*

> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.

*host_port*

> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. If undefined, no port will be exposed. Defaults to *80*.

*host_port_ssl*

> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container for SSL support. If undefined, no port will be exposed. Defaults to *443*; this only works if you also set `ssl_certificate` (see below).

*ssl_certificate*

> Optionally, provide the path to a file that contains a certificate and its private key. This needs to be a PEM file.

*docker_compose_dir*

> When using docker-compose, the directory where the `docker-compose.yml` file will be created (default `~/.awx/awxcompose`).

*custom_venv_dir*

> Adds the custom virtualenv environments from the local host to be passed into the containers at install time.

*ca_trust_dir*

> If you're using a CA that is not trusted by default, provide the path on your host where the CA certificates are stored.

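A minimal docker-compose-oriented `[all:vars]` section could therefore look like this sketch (the paths are illustrative, not required values):

```
[all:vars]
postgres_data_dir=/var/lib/awx/pgdocker
host_port=80
docker_compose_dir=~/.awx/awxcompose
```
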
#### Docker registry

If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:

*docker_registry*

> IP address and port, or URL, for accessing a registry.

*docker_registry_repository*

> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.

*docker_registry_username*

> Username of the user that will push images to the registry. Defaults to *developer*.

**Note**

> These settings are ignored if using official images.

#### Proxy settings

*http_proxy*

> IP address and port, or URL, for using an http_proxy.

*https_proxy*

> IP address and port, or URL, for using an https_proxy.

*no_proxy*

> IP addresses or URLs to exclude from the proxy.

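If your deployment host sits behind a proxy, the corresponding inventory lines might look like the following (the addresses below are placeholders):

```
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,.example.com
```
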
#### PostgreSQL

AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.

If you wish to use an external database, in the inventory file set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information.

### Run the installer

If you are not pushing images to a Docker registry, start the install by running the following:

```bash
# Set the working directory to installer
$ cd installer

# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```

If you're pushing built images to a repository, then use the `-e` option to pass the registry password as follows, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file):

```bash
# Set the working directory to installer
$ cd installer

# Run the Ansible playbook
$ ansible-playbook -i inventory -e docker_registry_password=password install.yml
```

### Post-install

After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.

If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the `docker-compose.yml` file is created (by default, `awx`).

Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:

```bash
# Tail the awx_task log
$ docker logs -f awx_task
```

You will see output similar to the following:

```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "changed": false,
    "db": "awx"
}
Operations to perform:
  Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
  Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying taggit.0001_initial... OK
  Applying taggit.0002_auto_20150616_2121... OK
  Applying main.0001_initial... OK
  ...
```

Once migrations complete, you will see the following log output, indicating that migrations have completed:

```bash
Python 2.7.5 (default, Nov  6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)

>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```

### Accessing AWX

The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).

You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.

# Installing the AWX CLI

`awx` is the official command-line client for AWX. It:

* Uses naming and structure consistent with the AWX HTTP API
* Provides consistent output formats with optional machine-parsable formats
* To the extent possible, auto-detects API versions, available endpoints, and feature support across multiple versions of AWX.

Potential uses include:

* Configuring and launching jobs/playbooks
* Checking on the status and output of job runs
* Managing objects like organizations, users, teams, etc.

The preferred way to install the AWX CLI is through pip, directly from PyPI:

```bash
pip3 install awxkit
awx --help
```

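Once installed, connection settings can be passed to the client as `--conf.*` options. As an illustrative sketch only (the host URL and credentials below are placeholders for your own deployment, and the exact flags may vary between awxkit releases), a first command might look like:

```bash
awx --conf.host https://awx.example.org \
    --conf.username admin \
    --conf.password password \
    job_templates list -f human
```
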
## Building the CLI Documentation

To build the docs, spin up a real AWX server, `pip3 install sphinx sphinxcontrib-autoprogram`, and run:

```bash
$ cd awxkit/awxkit/cli/docs
$ TOWER_HOST=https://awx.example.org TOWER_USERNAME=example TOWER_PASSWORD=secret make clean html
$ cd build/html/ && python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```

# Issues

## Reporting

Use the GitHub [issue tracker](https://github.com/ansible/awx/issues) for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template as possible. Version information and an accurate reproducing scenario are critical to helping us identify the problem.

Please don't use the issue tracker as a way to ask how to do something. Instead, use the [mailing list](https://groups.google.com/forum/#!forum/awx-project), and the `#ansible-awx` channel on irc.freenode.net, to get help.

Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a ["thumbs up"](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment) to give us an indication of the severity of the problem.

### UI Issues
|
||||||
|
|
||||||
|
When reporting issues for the UI, we also appreciate having screen shots and any error messages from the web browser's console. It's not unusual for browser extensions
|
||||||
|
and plugins to cause problems. Reporting those will also help speed up analyzing and resolving UI bugs.
|
||||||
|
|
||||||
|
### API and backend issues
|
||||||
|
|
||||||
|
For the API and backend services, please capture all of the logs that you can from the time the problem occurred.
|
||||||
|
|
||||||
|
## How issues are resolved
|
||||||
|
|
||||||
|
We triage our issues into high, medium, and low, and tag them with the relevant component (e.g. api, ui, installer, etc.). We typically focus on higher priority issues first. There aren't hard and fast rules for determining the severity of an issue, but generally high priority issues have an increased likelihood of breaking existing functionality, and negatively impacting a large number of users.
|
||||||
|
|
||||||
|
If your issue isn't considered high priority, then please be patient as it may take some time to get to it.
|
||||||
|
|
||||||
|
|
||||||
|
### Issue states
|
||||||
|
|
||||||
|
`state:needs_triage` This issue has not been looked at by a person yet and still needs to be triaged. This is the initial state for all new issues/pull requests.
|
||||||
|
|
||||||
|
`state:needs_info` The issue needs more information. This could be more debug output, more specifics out the system such as version information. Any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
|
||||||
|
|
||||||
|
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familar with an area of the code base the issue is for.
|
||||||
|
|
||||||
|
`state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.
|
||||||
|
|
||||||
|
`state:in_progress` The issue is actively being worked on and you should be in contact with who ever is assigned if you are also working on or plan to work on a similar issue.
|
||||||
|
|
||||||
|
`state:in_testing` The issue or pull request is currently being tested.
|
||||||
|
|
||||||
|
|
||||||
|
### AWX Issue Bot (awxbot)
|
||||||
|
We use an issue bot to help us label and organize incoming issues, this bot, awxbot, is a version of [ansible/ansibullbot](https://github.com/ansible/ansibullbot).
|
||||||
|
|
||||||
|
#### Overview
|
||||||
|
|
||||||
|
AWXbot performs many functions:
|
||||||
|
|
||||||
|
* Respond quickly to issues and pull requests.
|
||||||
|
* Identify the maintainers responsible for reviewing pull requests.
|
||||||
|
* Identify issues and pull request types and components (e.g. type:bug, component: api)
|
||||||
|
|
||||||
|
#### For issue submitters
|
||||||
|
|
||||||
|
The bot requires a minimal subset of information from the issue template:
|
||||||
|
|
||||||
|
* issue type
|
||||||
|
* component
|
||||||
|
* summary
|
||||||
|
|
||||||
|
If any of those items are missing your issue will still get the `needs_triage` label, but may end up being responded to slower than issues that have the complete set of information.
|
||||||
|
So please use the template whenever possible.
|
||||||
|
|
||||||
|
Currently you can expect the bot to add common labels such as `state:needs_triage`, `type:bug`, `type:enhancement`, `component:ui`, etc...
|
||||||
|
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
|
||||||
|
|
||||||
|
The `state:needs_triage` label will remain on your issue until a person has looked at it.
|
||||||
|
|
||||||
|
#### For pull request submitters
|
||||||
|
|
||||||
|
The bot requires a minimal subset of information from the pull request template:
|
||||||
|
|
||||||
|
* issue type
|
||||||
|
* component
|
||||||
|
* summary
|
||||||
|
|
||||||
|
If any of those items are missing your pull request will still get the `needs_triage` label, but may end up being responded to slower than other pull requests that have a complete set of information.
|
||||||
|
|
||||||
|
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc...
|
||||||
|
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
|
||||||
|
|
||||||
|
The `state:needs_triage` label will will remain on your pull request until a person has looked at it.
|
||||||
|
|
||||||
|
You can also expect the bot to CC maintainers of specific areas of the code, this will notify them that there is a pull request by placing a comment on the pull request.
|
||||||
|
The comment will look something like `CC @matburt @wwitzel3 ...`.
|
||||||
|
|
||||||
|
|
@ -0,0 +1,168 @@
Apache License
==============

_Version 2.0, January 2004_
_<<http://www.apache.org/licenses/>>_

### Terms and Conditions for use, reproduction, and distribution

#### 1. Definitions

“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means **(i)** the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the outstanding shares, or **(iii)** beneficial ownership of such entity.

“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.

“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”

“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

#### 2. Grant of Copyright License

Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

#### 3. Grant of Patent License

Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

#### 4. Redistribution

You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

* **(a)** You must give any other recipients of the Work or Derivative Works a copy of this License; and
* **(b)** You must cause any modified files to carry prominent notices stating that You changed the files; and
* **(c)** You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

#### 5. Submission of Contributions

Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

#### 6. Trademarks

This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

#### 7. Disclaimer of Warranty

Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

#### 8. Limitation of Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

#### 9. Accepting Warranty or Additional Liability

While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
@ -0,0 +1,32 @@
recursive-include awx *.py
recursive-include awx *.po
recursive-include awx *.mo
recursive-include awx/static *
recursive-include awx/templates *.html
recursive-include awx/api/templates *.md *.html
recursive-include awx/ui_next/build *.html
recursive-include awx/ui_next/build *
recursive-include awx/playbooks *.yml
recursive-include awx/lib/site-packages *
recursive-include awx/plugins *.ps1
recursive-include requirements *.txt
recursive-include requirements *.yml
recursive-include config *
recursive-include docs/licenses *
recursive-exclude awx devonly.py*
recursive-exclude awx/api/tests *
recursive-exclude awx/main/tests *
recursive-exclude awx/ui/client *
recursive-exclude awx/settings local_settings.py*
include tools/scripts/request_tower_configuration.sh
include tools/scripts/request_tower_configuration.ps1
include tools/scripts/ansible-tower-service
include tools/scripts/failure-event-handler
include tools/scripts/awx-python
include awx/playbooks/library/mkfifo.py
include tools/sosreport/*
include VERSION
include COPYING
include Makefile
prune awx/public
prune awx/projects
@ -0,0 +1,628 @@
PYTHON ?= python3
PYTHON_VERSION = $(shell $(PYTHON) -c "from distutils.sysconfig import get_python_version; print(get_python_version())")
SITELIB=$(shell $(PYTHON) -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")
OFFICIAL ?= no
PACKER ?= packer
PACKER_BUILD_OPTS ?= -var 'official=$(OFFICIAL)' -var 'aw_repo_url=$(AW_REPO_URL)'
NODE ?= node
NPM_BIN ?= npm
CHROMIUM_BIN=/tmp/chrome-linux/chrome
DEPS_SCRIPT ?= packaging/bundle/deps.py
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
IMAGE_REPOSITORY_AUTH ?=
IMAGE_REPOSITORY_BASE ?= https://gcr.io
VERSION := $(shell cat VERSION)
PYCURL_SSL_LIBRARY ?= openssl

# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_HOST ?= $(shell hostname)

VENV_BASE ?= /var/lib/awx/venv/
COLLECTION_BASE ?= /var/lib/awx/vendor/awx_ansible_collections
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db

DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
# Python packages to install only from source (not from binary wheels)
# Comma separated list
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio,pycurl
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==19.3.1 setuptools==41.6.0

# Determine appropriate shasum command
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
    SHASUM_BIN ?= sha256sum
endif
ifeq ($(UNAME_S),Darwin)
    SHASUM_BIN ?= shasum -a 256
endif
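The `ifeq` blocks above pick a SHA-256 tool per platform. The same detection can be sketched in plain shell (assuming GNU coreutils' `sha256sum` on Linux and the Perl `shasum` wrapper on macOS):

```shell
# Pick a SHA-256 command based on the kernel name, mirroring the Makefile's ifeq blocks.
case "$(uname -s)" in
  Linux)  SHASUM_BIN="sha256sum" ;;
  Darwin) SHASUM_BIN="shasum -a 256" ;;
  *)      SHASUM_BIN="" ;;
esac
echo "using: ${SHASUM_BIN:-none}"
```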
# Get the branch information from git
GIT_DATE := $(shell git log -n 1 --format="%ai")
DATE := $(shell date -u +%Y%m%d%H%M)

NAME ?= awx
GIT_REMOTE_URL = $(shell git config --get remote.origin.url)

# TAR build parameters
SDIST_TAR_NAME=$(NAME)-$(VERSION)
WHEEL_NAME=$(NAME)-$(VERSION)

SDIST_COMMAND ?= sdist
WHEEL_COMMAND ?= bdist_wheel
SDIST_TAR_FILE ?= $(SDIST_TAR_NAME).tar.gz
WHEEL_FILE ?= $(WHEEL_NAME)-py2-none-any.whl

I18N_FLAG_FILE = .i18n_built

.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
	develop refresh adduser migrate dbchange runserver \
	receiver test test_unit test_coverage coverage_html \
	dev_build release_build release_clean sdist \
	ui-docker-machine ui-docker ui-release ui-devel \
	ui-test ui-deps ui-test-ci VERSION

clean-tmp:
	rm -rf tmp/

clean-venv:
	rm -rf venv/

clean-dist:
	rm -rf dist

clean-schema:
	rm -rf swagger.json
	rm -rf schema.json
	rm -rf reference-schema.json

clean-languages:
	rm -f $(I18N_FLAG_FILE)
	find . -type f -regex ".*\.mo$$" -delete

# Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
	rm -rf awx/public
	rm -rf awx/lib/site-packages
	rm -rf awx/job_status
	rm -rf awx/job_output
	rm -rf reports
	rm -rf tmp
	rm -rf $(I18N_FLAG_FILE)
	mkdir tmp

clean-api:
	rm -rf build $(NAME)-$(VERSION) *.egg-info
	find . -type f -regex ".*\.py[co]$$" -delete
	find . -type d -name "__pycache__" -delete
	rm -f awx/awx_test.sqlite3*
	rm -rf requirements/vendor
	rm -rf awx/projects

clean-awxkit:
	rm -rf awxkit/*.egg-info awxkit/.tox awxkit/build/*

# convenience target to assert environment variables are defined
guard-%:
	@if [ "$${$*}" = "" ]; then \
	    echo "The required environment variable '$*' is not set"; \
	    exit 1; \
	fi
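The `guard-%` pattern rule lets any other target declare an environment variable as a prerequisite (e.g. a hypothetical `deploy: guard-IMAGE_TAG` would abort if `IMAGE_TAG` is unset). Stripped of make's `$$` escaping, the check it runs is just this shell idiom:

```shell
# Fail when a named environment variable is empty or unset,
# mirroring the guard-% rule above.
require_var() {
  eval "val=\${$1}"
  if [ "$val" = "" ]; then
    echo "The required environment variable '$1' is not set"
    return 1
  fi
}

FOO=bar
require_var FOO && echo "FOO ok"
require_var SOME_UNSET_VAR || echo "SOME_UNSET_VAR rejected"
```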
virtualenv: virtualenv_ansible virtualenv_awx

# virtualenv_* targets do not use --system-site-packages to prevent bugs installing packages
# but Ansible venvs are expected to have this, so that must be done after venv creation
virtualenv_ansible:
	if [ "$(VENV_BASE)" ]; then \
	    if [ ! -d "$(VENV_BASE)" ]; then \
	        mkdir $(VENV_BASE); \
	    fi; \
	    if [ ! -d "$(VENV_BASE)/ansible" ]; then \
	        virtualenv -p python $(VENV_BASE)/ansible && \
	        $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
	    fi; \
	fi

virtualenv_ansible_py3:
	if [ "$(VENV_BASE)" ]; then \
	    if [ ! -d "$(VENV_BASE)" ]; then \
	        mkdir $(VENV_BASE); \
	    fi; \
	    if [ ! -d "$(VENV_BASE)/ansible" ]; then \
	        virtualenv -p $(PYTHON) $(VENV_BASE)/ansible; \
	        $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
	    fi; \
	fi

# flit is needed for offline install of certain packages, specifically ptyprocess
# it is needed for setup, but not always recognized as a setup dependency
# similar to pip, setuptools, and wheel, these are all needed here as bootstrapping issues
virtualenv_awx:
	if [ "$(VENV_BASE)" ]; then \
	    if [ ! -d "$(VENV_BASE)" ]; then \
	        mkdir $(VENV_BASE); \
	    fi; \
	    if [ ! -d "$(VENV_BASE)/awx" ]; then \
	        virtualenv -p $(PYTHON) $(VENV_BASE)/awx; \
	        $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
	    fi; \
	fi

# --ignore-install flag is not used because *.txt files should specify exact versions
requirements_ansible: virtualenv_ansible
	if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
	    cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) -r /dev/stdin ; \
	else \
	    cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
	fi
	$(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt
	# Same effect as using --system-site-packages flag on venv creation
	rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt

requirements_ansible_py3: virtualenv_ansible_py3
	if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
	    cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) -r /dev/stdin ; \
	else \
	    cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
	fi
	$(VENV_BASE)/ansible/bin/pip3 uninstall --yes -r requirements/requirements_ansible_uninstall.txt
	# Same effect as using --system-site-packages flag on venv creation
	rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt

requirements_ansible_dev:
	if [ "$(VENV_BASE)" ]; then \
	    $(VENV_BASE)/ansible/bin/pip install pytest mock; \
	fi

# Install third-party requirements needed for AWX's environment.
# this does not use system site packages intentionally
requirements_awx: virtualenv_awx
	if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
	    cat requirements/requirements.txt requirements/requirements_local.txt | $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) -r /dev/stdin ; \
	else \
	    cat requirements/requirements.txt requirements/requirements_git.txt | $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
	fi
	$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt

requirements_awx_dev:
	$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt

requirements_collections:
	mkdir -p $(COLLECTION_BASE)
	n=0; \
	until [ "$$n" -ge 5 ]; do \
	    ansible-galaxy collection install -r requirements/collections_requirements.yml -p $(COLLECTION_BASE) && break; \
	    n=$$((n+1)); \
	done
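The `until` loop in `requirements_collections` retries a network-dependent `ansible-galaxy` install up to five times, stopping at the first success. Stripped of make's `$$` escaping, the idiom looks like this (here `try_install` is a stand-in for the real command, rigged to succeed on its third call so the loop is exercised):

```shell
# Retry a command up to 5 times, stopping at the first success.
attempts=0
try_install() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # stand-in: fails twice, succeeds on the third call
}

n=0
until [ "$n" -ge 5 ]; do
  try_install && break
  n=$((n + 1))
done
echo "finished after $attempts attempts"
```

Because `break` fires on the first success, a transient failure costs one extra iteration rather than aborting the whole target.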
|
||||||
|
requirements: requirements_ansible requirements_awx requirements_collections
|
||||||
|
|
||||||
|
requirements_dev: requirements_awx requirements_ansible_py3 requirements_awx_dev requirements_ansible_dev
|
||||||
|
|
||||||
|
requirements_test: requirements
|
||||||
|
|
||||||
|
# "Install" awx package in development mode.
|
||||||
|
develop:
|
||||||
|
@if [ "$(VIRTUAL_ENV)" ]; then \
|
||||||
|
pip uninstall -y awx; \
|
||||||
|
$(PYTHON) setup.py develop; \
|
||||||
|
else \
|
||||||
|
pip uninstall -y awx; \
|
||||||
|
$(PYTHON) setup.py develop; \
|
||||||
|
fi
|
||||||
|
|
||||||
|
version_file:
|
||||||
|
mkdir -p /var/lib/awx/; \
|
||||||
|
if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
python -c "import awx; print(awx.__version__)" > /var/lib/awx/.awx_version; \
|
||||||
|
|
||||||
|
# Do any one-time init tasks.
|
||||||
|
comma := ,
|
||||||
|
init:
|
||||||
|
if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
$(MANAGEMENT_COMMAND) provision_instance --hostname=$(COMPOSE_HOST); \
|
||||||
|
$(MANAGEMENT_COMMAND) register_queue --queuename=tower --instance_percent=100;\
|
||||||
|
if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
|
||||||
|
$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
|
||||||
|
$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
|
||||||
|
$(MANAGEMENT_COMMAND) generate_isolated_key > /awx_devel/awx/main/isolated/authorized_keys; \
|
||||||
|
fi;
|
||||||
|
|
||||||
|
# Refresh development environment after pulling new code.
|
||||||
|
refresh: clean requirements_dev version_file develop migrate
|
||||||
|
|
||||||
|
# Create Django superuser.
|
||||||
|
adduser:
|
||||||
|
$(MANAGEMENT_COMMAND) createsuperuser
|
||||||
|
|
||||||
|
# Create database tables and apply any new migrations.
|
||||||
|
migrate:
|
||||||
|
if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
$(MANAGEMENT_COMMAND) migrate --noinput
|
||||||
|
|
||||||
|
# Run after making changes to the models to create a new migration.
|
||||||
|
dbchange:
|
||||||
|
$(MANAGEMENT_COMMAND) makemigrations
|
||||||
|
|
||||||
|
supervisor:
|
||||||
|
@if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
supervisord --pidfile=/tmp/supervisor_pid -n
|
||||||
|
|
||||||
|
collectstatic:
|
||||||
|
@if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
mkdir -p awx/public/static && $(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
|
||||||
|
|
||||||
|
uwsgi: collectstatic
|
||||||
|
@if [ "$(VENV_BASE)" ]; then \
|
||||||
|
. $(VENV_BASE)/awx/bin/activate; \
|
||||||
|
fi; \
|
||||||
|
	uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/var/lib/awx/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1="exec:supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver"

daphne:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer

wsbroadcast:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	$(PYTHON) manage.py run_wsbroadcast

# Run to start the background task dispatcher for development.
dispatcher:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	$(PYTHON) manage.py run_dispatcher

# Run to start the zeromq callback receiver
receiver:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	$(PYTHON) manage.py run_callback_receiver

nginx:
	nginx -g "daemon off;"

jupyter:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	$(MANAGEMENT_COMMAND) shell_plus --notebook

reports:
	mkdir -p $@

pep8: reports
	@(set -o pipefail && $@ | tee reports/$@.report)

flake8: reports
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	(set -o pipefail && $@ | tee reports/$@.report)

pyflakes: reports
	@(set -o pipefail && $@ | tee reports/$@.report)

pylint: reports
	@(set -o pipefail && $@ | tee reports/$@.report)

genschema: reports
	$(MAKE) swagger PYTEST_ARGS="--genschema --create-db "
	mv swagger.json schema.json

swagger: reports
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)

check: flake8 pep8 # pyflakes pylint
awx-link:
	[ -d "/awx_devel/awx.egg-info" ] || python3 /awx_devel/setup.py egg_info_dev
	cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/python$(PYTHON_VERSION)/site-packages/awx.egg-link

TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests

# Run all API unit tests.
test:
	if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider -n auto $(TEST_DIRS)
	cmp VERSION awxkit/VERSION || "VERSION and awxkit/VERSION *must* match"
	cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
	awx-manage check_migrations --dry-run --check -n 'missing_migration_file'

COLLECTION_TEST_DIRS ?= awx_collection/test/awx
COLLECTION_TEST_TARGET ?=
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)

test_collection:
	rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
	if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	py.test $(COLLECTION_TEST_DIRS) -v
# The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
# Second we will load any libraries out of the virtualenv (if it's unspecified that should be ok because python should not load out of an empty directory)
# Finally we will add the system path so that the tests can find the ansible libraries

flake8_collection:
	flake8 awx_collection/ # Different settings, in main exclude list

test_collection_all: test_collection flake8_collection

# WARNING: symlinking a collection is fundamentally unstable
# this is for rapid development iteration with playbooks, do not use with other test targets
symlink_collection:
	rm -rf $(COLLECTION_INSTALL)
	mkdir -p ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE) # in case it does not exist
	ln -s $(shell pwd)/awx_collection $(COLLECTION_INSTALL)

build_collection:
	ansible-playbook -i localhost, awx_collection/tools/template_galaxy.yml -e collection_package=$(COLLECTION_PACKAGE) -e collection_namespace=$(COLLECTION_NAMESPACE) -e collection_version=$(VERSION) -e '{"awx_template_version":false}'
	ansible-galaxy collection build awx_collection_build --force --output-path=awx_collection_build

install_collection: build_collection
	rm -rf $(COLLECTION_INSTALL)
	ansible-galaxy collection install awx_collection_build/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(VERSION).tar.gz

test_collection_sanity: install_collection
	cd $(COLLECTION_INSTALL) && ansible-test sanity

test_collection_integration: install_collection
	cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)

test_unit:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit

# Run all API unit tests with coverage enabled.
test_coverage:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	py.test --create-db --cov=awx --cov-report=xml --junitxml=./reports/junit.xml $(TEST_DIRS)

# Output test coverage as HTML (into htmlcov directory).
coverage_html:
	coverage html

# Run API unit tests across multiple Python/Django versions with Tox.
test_tox:
	tox -v

# Make fake data
DATA_GEN_PRESET = ""
bulk_data:
	@if [ "$(VENV_BASE)" ]; then \
	  . $(VENV_BASE)/awx/bin/activate; \
	fi; \
	$(PYTHON) tools/data_generators/rbac_dummy_data_generator.py --preset=$(DATA_GEN_PRESET)
# l10n TASKS
# --------------------------------------

# check for UI po files
HAVE_PO := $(shell ls awx/ui/po/*.po 2>/dev/null)
check-po:
ifdef HAVE_PO
	# Should be 'Language: zh-CN' but not 'Language: zh_CN' in zh_CN.po
	for po in awx/ui/po/*.po ; do \
	  echo $$po; \
	  mo="awx/ui/po/`basename $$po .po`.mo"; \
	  msgfmt --check --verbose $$po -o $$mo; \
	  if test "$$?" -ne 0 ; then \
	    exit -1; \
	  fi; \
	  rm $$mo; \
	  name=`echo "$$po" | grep '-'`; \
	  if test "x$$name" != x ; then \
	    right_name=`echo $$language | sed -e 's/-/_/'`; \
	    echo "ERROR: WRONG $$name CORRECTION: $$right_name"; \
	    exit -1; \
	  fi; \
	  language=`grep '^"Language:' "$$po" | grep '_'`; \
	  if test "x$$language" != x ; then \
	    right_language=`echo $$language | sed -e 's/_/-/'`; \
	    echo "ERROR: WRONG $$language CORRECTION: $$right_language in $$po"; \
	    exit -1; \
	  fi; \
	done;
else
	@echo No PO files
endif
# UI TASKS
# --------------------------------------

UI_BUILD_FLAG_FILE = awx/ui_next/.ui-built

clean-ui:
	rm -rf node_modules
	rm -rf awx/ui_next/node_modules
	rm -rf awx/ui_next/build
	rm -rf awx/ui_next/src/locales/_build
	rm -rf $(UI_BUILD_FLAG_FILE)
	git checkout awx/ui_next/src/locales

awx/ui_next/node_modules:
	$(NPM_BIN) --prefix awx/ui_next --loglevel warn --ignore-scripts install

$(UI_BUILD_FLAG_FILE):
	$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings
	$(NPM_BIN) --prefix awx/ui_next --loglevel warn run compile-strings
	$(NPM_BIN) --prefix awx/ui_next --loglevel warn run build
	git checkout awx/ui_next/src/locales
	mkdir -p awx/public/static/css
	mkdir -p awx/public/static/js
	mkdir -p awx/public/static/media
	cp -r awx/ui_next/build/static/css/* awx/public/static/css
	cp -r awx/ui_next/build/static/js/* awx/public/static/js
	cp -r awx/ui_next/build/static/media/* awx/public/static/media
	touch $@

ui-release: awx/ui_next/node_modules $(UI_BUILD_FLAG_FILE)

ui-devel: awx/ui_next/node_modules
	@$(MAKE) -B $(UI_BUILD_FLAG_FILE)

ui-zuul-lint-and-test:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next lint
	$(NPM_BIN) run --prefix awx/ui_next prettier-check
	$(NPM_BIN) run --prefix awx/ui_next test
# Build a pip-installable package into dist/ with a timestamped version number.
dev_build:
	$(PYTHON) setup.py dev_build

# Build a pip-installable package into dist/ with the release version number.
release_build:
	$(PYTHON) setup.py release_build

dist/$(SDIST_TAR_FILE): ui-release VERSION
	$(PYTHON) setup.py $(SDIST_COMMAND)

dist/$(WHEEL_FILE): ui-release
	$(PYTHON) setup.py $(WHEEL_COMMAND)

sdist: dist/$(SDIST_TAR_FILE)
	@echo "#############################################"
	@echo "Artifacts:"
	@echo dist/$(SDIST_TAR_FILE)
	@echo "#############################################"

wheel: dist/$(WHEEL_FILE)
	@echo "#############################################"
	@echo "Artifacts:"
	@echo dist/$(WHEEL_FILE)
	@echo "#############################################"

# Build setup bundle tarball
setup-bundle-build:
	mkdir -p $@
docker-auth:
	@if [ "$(IMAGE_REPOSITORY_AUTH)" ]; then \
	  echo "$(IMAGE_REPOSITORY_AUTH)" | docker login -u oauth2accesstoken --password-stdin $(IMAGE_REPOSITORY_BASE); \
	fi;

# This directory is bind-mounted inside of the development container and
# needs to be pre-created for permissions to be set correctly. Otherwise,
# Docker will create this directory as root.
awx/projects:
	@mkdir -p $@

# Docker isolated rampart
docker-compose-isolated: awx/projects
	CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up

COMPOSE_UP_OPTS ?=

# Docker Compose Development environment
docker-compose: docker-auth awx/projects
	CURRENT_UID=$(shell id -u) OS="$(shell docker info | grep 'Operating System')" TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml $(COMPOSE_UP_OPTS) up --no-recreate awx

docker-compose-cluster: docker-auth awx/projects
	CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up

docker-compose-credential-plugins: docker-auth awx/projects
	echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
	CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx

docker-compose-test: docker-auth awx/projects
	cd tools && CURRENT_UID=$(shell id -u) OS="$(shell docker info | grep 'Operating System')" TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash

docker-compose-runtest: awx/projects
	cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh

docker-compose-build-swagger: awx/projects
	cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports --no-deps awx /start_tests.sh swagger

detect-schema-change: genschema
	curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
	# Ignore differences in whitespace with -b
	diff -u -b reference-schema.json schema.json

docker-compose-clean: awx/projects
	cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf

# Base development image build
docker-compose-build:
	ansible localhost -m template -a "src=installer/roles/image_build/templates/Dockerfile.j2 dest=tools/docker-compose/Dockerfile" -e build_dev=True
	docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile \
	  --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
	docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
	#docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)

# For use when developing on "isolated" AWX deployments
docker-compose-isolated-build: docker-compose-build
	docker build -t ansible/awx_isolated -f tools/docker-isolated/Dockerfile .
	docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
	#docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)

docker-clean:
	$(foreach container_id,$(shell docker ps -f name=tools_awx -aq),docker stop $(container_id); docker rm -f $(container_id);)
	docker images | grep "awx_devel" | awk '{print $$1 ":" $$2}' | xargs docker rmi

docker-clean-volumes: docker-compose-clean
	docker volume rm tools_awx_db

docker-refresh: docker-clean docker-compose

# Docker Development Environment with Elastic Stack Connected
docker-compose-elk: docker-auth awx/projects
	CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate

docker-compose-cluster-elk: docker-auth awx/projects
	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate

prometheus:
	docker run -u0 --net=tools_default --link=`docker ps | egrep -o "tools_awx(_run)?_([^ ]+)?"`:awxweb --volume `pwd`/tools/prometheus:/prometheus --name prometheus -d -p 0.0.0.0:9090:9090 prom/prometheus --web.enable-lifecycle --config.file=/prometheus/prometheus.yml

clean-elk:
	docker stop tools_kibana_1
	docker stop tools_logstash_1
	docker stop tools_elasticsearch_1
	docker rm tools_logstash_1
	docker rm tools_elasticsearch_1
	docker rm tools_kibana_1

psql-container:
	docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'

VERSION:
	@echo "awx: $(VERSION)"

Dockerfile: installer/roles/image_build/templates/Dockerfile.j2
	ansible localhost -m template -a "src=installer/roles/image_build/templates/Dockerfile.j2 dest=Dockerfile"
@@ -0,0 +1,43 @@
[](https://ansible.softwarefactory-project.io/zuul/status)

AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is the upstream project for [Tower](https://www.ansible.com/tower), a commercial derivative of AWX.

To install AWX, please view the [Install guide](./INSTALL.md).

To learn more about using AWX and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).

The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).

The AWX logos and branding assets are covered by [our trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).

Contributing
------------

- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are made through pull requests against the `devel` branch.
- All contributors must use `git commit --signoff` for any commit to be merged and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs. `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net and talk about what you would like to do or add first. This not only helps everyone know what's going on, but it also helps save time and effort if the community decides some changes are needed.
Reporting Issues
----------------

If you're experiencing a problem that you feel is a bug in AWX or have ideas for improving AWX, we encourage you to open an issue and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).

Code of Conduct
---------------

We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).

Get Involved
------------

We welcome your feedback and ideas. Here's how to reach us with feedback and questions:

- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)

License
-------

[Apache v2](./LICENSE.md)
@@ -0,0 +1 @@
17.1.0
@@ -0,0 +1,154 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
from __future__ import absolute_import, unicode_literals

import os
import sys
import warnings

from pkg_resources import get_distribution

__version__ = get_distribution('awx').version
__all__ = ['__version__']


# Check for the presence/absence of "devonly" module to determine if running
# from a source code checkout or release package.
try:
    import awx.devonly  # noqa
    MODE = 'development'
except ImportError:  # pragma: no cover
    MODE = 'production'


import hashlib

try:
    import django  # noqa: F401
    HAS_DJANGO = True
except ImportError:
    HAS_DJANGO = False
else:
    from django.db.backends.base import schema
    from django.db.models import indexes
    from django.db.backends.utils import names_digest


if HAS_DJANGO is True:

    # See upgrade blocker note in requirements/README.md
    try:
        names_digest('foo', 'bar', 'baz', length=8)
    except ValueError:
        def names_digest(*args, length):
            """
            Generate a 32-bit digest of a set of arguments that can be used to shorten
            identifying names. Support for use in FIPS environments.
            """
            h = hashlib.md5(usedforsecurity=False)
            for arg in args:
                h.update(arg.encode())
            return h.hexdigest()[:length]

        schema.names_digest = names_digest
        indexes.names_digest = names_digest


def find_commands(management_dir):
    # Modified version of function from django/core/management/__init__.py.
    command_dir = os.path.join(management_dir, 'commands')
    commands = []
    try:
        for f in os.listdir(command_dir):
            if f.startswith('_'):
                continue
            elif f.endswith('.py') and f[:-3] not in commands:
                commands.append(f[:-3])
            elif f.endswith('.pyc') and f[:-4] not in commands:  # pragma: no cover
                commands.append(f[:-4])
    except OSError:
        pass
    return commands


def oauth2_getattribute(self, attr):
    # Custom method to override
    # oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__
    from django.conf import settings
    val = None
    if 'migrate' not in sys.argv:
        # certain Django OAuth Toolkit migrations actually reference
        # setting lookups for references to model classes (e.g.,
        # oauth2_settings.REFRESH_TOKEN_MODEL)
        # If we're doing an OAuth2 setting lookup *while running* a migration,
        # don't do our usual "Configure Tower in Tower" database setting lookup
        val = settings.OAUTH2_PROVIDER.get(attr)
    if val is None:
        val = object.__getattribute__(self, attr)
    return val


def prepare_env():
    # Update the default settings environment variable based on current mode.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
    # Hide DeprecationWarnings when running in production. Need to first load
    # settings to apply our filter after Django's own warnings filter.
    from django.conf import settings
    if not settings.DEBUG:  # pragma: no cover
        warnings.simplefilter('ignore', DeprecationWarning)
    # Monkeypatch Django find_commands to also work with .pyc files.
    import django.core.management
    django.core.management.find_commands = find_commands

    # Monkeypatch Oauth2 toolkit settings class to check for settings
    # in django.conf settings each time, not just once during import
    import oauth2_provider.settings
    oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__ = oauth2_getattribute

    # Use the AWX_TEST_DATABASE_* environment variables to specify the test
    # database settings to use when management command is run as an external
    # program via unit tests.
    for opt in ('ENGINE', 'NAME', 'USER', 'PASSWORD', 'HOST', 'PORT'):  # pragma: no cover
        if os.environ.get('AWX_TEST_DATABASE_%s' % opt, None):
            settings.DATABASES['default'][opt] = os.environ['AWX_TEST_DATABASE_%s' % opt]
    # Disable capturing all SQL queries in memory when in DEBUG mode.
    if settings.DEBUG and not getattr(settings, 'SQL_DEBUG', True):
        from django.db.backends.base.base import BaseDatabaseWrapper
        from django.db.backends.utils import CursorWrapper
        BaseDatabaseWrapper.make_debug_cursor = lambda self, cursor: CursorWrapper(cursor, self)

    # Use the default devserver addr/port defined in settings for runserver.
    default_addr = getattr(settings, 'DEVSERVER_DEFAULT_ADDR', '127.0.0.1')
    default_port = getattr(settings, 'DEVSERVER_DEFAULT_PORT', 8000)
    from django.core.management.commands import runserver as core_runserver
    original_handle = core_runserver.Command.handle

    def handle(self, *args, **options):
        if not options.get('addrport'):
            options['addrport'] = '%s:%d' % (default_addr, int(default_port))
        elif options.get('addrport').isdigit():
            options['addrport'] = '%s:%d' % (default_addr, int(options['addrport']))
        return original_handle(self, *args, **options)

    core_runserver.Command.handle = handle


def manage():
    # Prepare the AWX environment.
    prepare_env()
    # Now run the command (or display the version).
    from django.conf import settings
    from django.core.management import execute_from_command_line
    if len(sys.argv) >= 2 and sys.argv[1] in ('version', '--version'):  # pragma: no cover
        sys.stdout.write('%s\n' % __version__)
    # If running as a user without permission to read settings, display an
    # error message. Allow --help to still work.
    elif settings.SECRET_KEY == 'permission-denied':
        if len(sys.argv) == 1 or len(sys.argv) >= 2 and sys.argv[1] in ('-h', '--help', 'help'):
            execute_from_command_line(sys.argv)
            sys.stdout.write('\n')
        prog = os.path.basename(sys.argv[0])
        sys.stdout.write('Permission denied: %s must be run as root or awx.\n' % prog)
        sys.exit(1)
    else:
        execute_from_command_line(sys.argv)
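The FIPS fallback above can be exercised on its own. The following standalone sketch mirrors the digest logic (the plain `hashlib.md5()` call is a simplification: the code above passes `usedforsecurity=False`, which requires Python 3.9+):

```python
import hashlib


def names_digest(*args, length):
    """Shortened hex digest of the joined arguments (mirrors the fallback above)."""
    h = hashlib.md5()  # simplified; upstream adds usedforsecurity=False for FIPS builds
    for arg in args:
        h.update(arg.encode())
    return h.hexdigest()[:length]


digest = names_digest('foo', 'bar', 'baz', length=8)
print(digest)  # 8 hex characters, stable across runs
```

The point of the truncation is only to produce short, deterministic identifier suffixes for index names, so collisions are acceptable and cryptographic strength is irrelevant.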
@@ -0,0 +1,2 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
@@ -0,0 +1,56 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Python
import logging

# Django
from django.conf import settings
from django.utils.encoding import smart_text

# Django REST Framework
from rest_framework import authentication

# Django-OAuth-Toolkit
from oauth2_provider.contrib.rest_framework import OAuth2Authentication

logger = logging.getLogger('awx.api.authentication')


class LoggedBasicAuthentication(authentication.BasicAuthentication):

    def authenticate(self, request):
        if not settings.AUTH_BASIC_ENABLED:
            return
        ret = super(LoggedBasicAuthentication, self).authenticate(request)
        if ret:
            username = ret[0].username if ret[0] else '<none>'
            logger.info(smart_text(u"User {} performed a {} to {} through the API".format(username, request.method, request.path)))
        return ret

    def authenticate_header(self, request):
        if not settings.AUTH_BASIC_ENABLED:
            return
        return super(LoggedBasicAuthentication, self).authenticate_header(request)


class SessionAuthentication(authentication.SessionAuthentication):

    def authenticate_header(self, request):
        return 'Session'


class LoggedOAuth2Authentication(OAuth2Authentication):

    def authenticate(self, request):
        ret = super(LoggedOAuth2Authentication, self).authenticate(request)
        if ret:
            user, token = ret
            username = user.username if user else '<none>'
            logger.info(smart_text(
                u"User {} performed a {} to {} through the API using OAuth 2 token {}.".format(
                    username, request.method, request.path, token.pk
                )
            ))
            setattr(user, 'oauth_scopes', [x for x in token.scope.split() if x])
        return ret
@@ -0,0 +1,78 @@
# Django
from django.utils.translation import ugettext_lazy as _

# AWX
from awx.conf import fields, register
from awx.api.fields import OAuth2ProviderField
from oauth2_provider.settings import oauth2_settings


register(
    'SESSION_COOKIE_AGE',
    field_class=fields.IntegerField,
    min_value=60,
    max_value=30000000000,  # approx 1,000 years, higher values give OverflowError
    label=_('Idle Time Force Log Out'),
    help_text=_('Number of seconds that a user is inactive before they will need to login again.'),
    category=_('Authentication'),
    category_slug='authentication',
    unit=_('seconds'),
)
register(
    'SESSIONS_PER_USER',
    field_class=fields.IntegerField,
    min_value=-1,
    label=_('Maximum number of simultaneous logged in sessions'),
    help_text=_('Maximum number of simultaneous logged in sessions a user may have. To disable enter -1.'),
    category=_('Authentication'),
    category_slug='authentication',
)
register(
    'AUTH_BASIC_ENABLED',
    field_class=fields.BooleanField,
    label=_('Enable HTTP Basic Auth'),
    help_text=_('Enable HTTP Basic Auth for the API Browser.'),
    category=_('Authentication'),
    category_slug='authentication',
)
register(
    'OAUTH2_PROVIDER',
    field_class=OAuth2ProviderField,
    default={'ACCESS_TOKEN_EXPIRE_SECONDS': oauth2_settings.ACCESS_TOKEN_EXPIRE_SECONDS,
             'AUTHORIZATION_CODE_EXPIRE_SECONDS': oauth2_settings.AUTHORIZATION_CODE_EXPIRE_SECONDS,
             'REFRESH_TOKEN_EXPIRE_SECONDS': oauth2_settings.REFRESH_TOKEN_EXPIRE_SECONDS},
    label=_('OAuth 2 Timeout Settings'),
    help_text=_('Dictionary for customizing OAuth 2 timeouts, available items are '
                '`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
                'of seconds, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
                'authorization codes in the number of seconds, and `REFRESH_TOKEN_EXPIRE_SECONDS`, '
                'the duration of refresh tokens, after expired access tokens, '
                'in the number of seconds.'),
    category=_('Authentication'),
    category_slug='authentication',
    unit=_('seconds'),
)
register(
    'ALLOW_OAUTH2_FOR_EXTERNAL_USERS',
    field_class=fields.BooleanField,
    default=False,
    label=_('Allow External Users to Create OAuth2 Tokens'),
    help_text=_('For security reasons, users from external auth providers (LDAP, SAML, '
                'SSO, Radius, and others) are not allowed to create OAuth2 tokens. '
                'To change this behavior, enable this setting. Existing tokens will '
                'not be deleted when this setting is toggled off.'),
    category=_('Authentication'),
    category_slug='authentication',
)
register(
    'LOGIN_REDIRECT_OVERRIDE',
    field_class=fields.CharField,
    allow_blank=True,
    required=False,
    default='',
    label=_('Login redirect override URL'),
    help_text=_('URL to which unauthorized users will be redirected to log in. '
                'If blank, users will be sent to the Tower login page.'),
    category=_('Authentication'),
    category_slug='authentication',
)
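The `register()` calls above all feed a central settings registry keyed by setting name. As a rough, hypothetical sketch of that pattern (this simplified `register` is a stand-in, not the real `awx.conf` API):

```python
# Minimal registry sketch illustrating the pattern used by the register() calls
# above. Names and the field_class stand-in are illustrative only.
SETTINGS_REGISTRY = {}


def register(name, **options):
    """Record a setting's field class and metadata under its name."""
    SETTINGS_REGISTRY[name] = options


register(
    'SESSION_COOKIE_AGE',
    field_class=int,            # stand-in for fields.IntegerField
    min_value=60,
    category_slug='authentication',
)

print(SETTINGS_REGISTRY['SESSION_COOKIE_AGE']['min_value'])  # 60
```

The real implementation additionally validates values through the registered field class and serves them through the API, but the lookup structure is the same name-to-metadata mapping.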
@ -0,0 +1,22 @@
|
||||||
|
# Copyright (c) 2018 Ansible by Red Hat
|
||||||
|
# All Rights Reserved.
|
||||||
|
|
||||||
|
# Django
|
||||||
|
from django.utils.translation import ugettext_lazy as _
|
||||||
|
|
||||||
|
# Django REST Framework
|
||||||
|
from rest_framework.exceptions import ValidationError
|
||||||
|
|
||||||
|
|
||||||
|
class ActiveJobConflict(ValidationError):
|
||||||
|
status_code = 409
|
||||||
|
|
||||||
|
def __init__(self, active_jobs):
|
||||||
|
# During APIException.__init__(), Django Rest Framework
|
||||||
|
# turn everything in self.detail into string by using force_text.
|
||||||
|
# Declare detail afterwards circumvent this behavior.
|
||||||
|
super(ActiveJobConflict, self).__init__()
|
||||||
|
self.detail = {
|
||||||
|
"error": _("Resource is being used by running jobs."),
|
||||||
|
"active_jobs": active_jobs
|
||||||
|
}
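A client hitting an endpoint guarded by `ActiveJobConflict` receives HTTP 409 with a body shaped like its `detail` dict. A hypothetical sketch of that payload (the job values are illustrative, not from the source):

```python
# Sketch of the 409 response body; the keys mirror ActiveJobConflict.detail.
import json

def conflict_payload(active_jobs):
    # active_jobs is a list of summaries of the jobs holding the resource.
    return {
        "error": "Resource is being used by running jobs.",
        "active_jobs": active_jobs,
    }

print(json.dumps(conflict_payload([{"id": 42, "type": "job"}])))
```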

@@ -0,0 +1,112 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.

# Django
from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist

# Django REST Framework
from rest_framework import serializers

# AWX
from awx.conf import fields
from awx.main.models import Credential

__all__ = ['BooleanNullField', 'CharNullField', 'ChoiceNullField', 'VerbatimField']


class NullFieldMixin(object):
    '''
    Mixin to prevent shortcutting validation when we want to allow null input,
    but coerce the resulting value to another type.
    '''

    def validate_empty_values(self, data):
        (is_empty_value, data) = super(NullFieldMixin, self).validate_empty_values(data)
        if is_empty_value and data is None:
            return (False, data)
        return (is_empty_value, data)


class BooleanNullField(NullFieldMixin, serializers.NullBooleanField):
    '''
    Custom boolean field that allows null and empty string as False values.
    '''

    def to_internal_value(self, data):
        return bool(super(BooleanNullField, self).to_internal_value(data))


class CharNullField(NullFieldMixin, serializers.CharField):
    '''
    Custom char field that allows null as input and coerces to an empty string.
    '''

    def __init__(self, **kwargs):
        kwargs['allow_null'] = True
        super(CharNullField, self).__init__(**kwargs)

    def to_internal_value(self, data):
        return super(CharNullField, self).to_internal_value(data or u'')


class ChoiceNullField(NullFieldMixin, serializers.ChoiceField):
    '''
    Custom choice field that allows null as input and coerces to an empty string.
    '''

    def __init__(self, **kwargs):
        kwargs['allow_null'] = True
        super(ChoiceNullField, self).__init__(**kwargs)

    def to_internal_value(self, data):
        return super(ChoiceNullField, self).to_internal_value(data or u'')


class VerbatimField(serializers.Field):
    '''
    Custom field that passes the value through without changes.
    '''

    def to_internal_value(self, data):
        return data

    def to_representation(self, value):
        return value


class OAuth2ProviderField(fields.DictField):

    default_error_messages = {
        'invalid_key_names': _('Invalid key names: {invalid_key_names}'),
    }
    valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS', 'REFRESH_TOKEN_EXPIRE_SECONDS'}
    child = fields.IntegerField(min_value=1)

    def to_internal_value(self, data):
        data = super(OAuth2ProviderField, self).to_internal_value(data)
        invalid_flags = (set(data.keys()) - self.valid_key_names)
        if invalid_flags:
            self.fail('invalid_key_names', invalid_key_names=', '.join(list(invalid_flags)))
        return data


class DeprecatedCredentialField(serializers.IntegerField):

    def __init__(self, **kwargs):
        kwargs['allow_null'] = True
        kwargs['default'] = None
        kwargs['min_value'] = 1
        kwargs.setdefault('help_text', 'This resource has been deprecated and will be removed in a future release')
        super(DeprecatedCredentialField, self).__init__(**kwargs)

    def to_internal_value(self, pk):
        try:
            pk = int(pk)
        except ValueError:
            self.fail('invalid')
        try:
            Credential.objects.get(pk=pk)
        except ObjectDoesNotExist:
            raise serializers.ValidationError(_('Credential {} does not exist').format(pk))
        return pk
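The trick in `NullFieldMixin` is that DRF normally treats `None` as "empty, stop validating"; the mixin reports it as non-empty so `to_internal_value` still runs and can coerce it. A standalone sketch of that behavior without Django/DRF (the base class here stands in for DRF's field, so its API is an assumption for illustration):

```python
# Standalone mimic of NullFieldMixin + CharNullField coercing null to ''.
class BaseField:
    def validate_empty_values(self, data):
        # DRF-style contract: (True, data) means "empty, skip validation".
        return (data is None, data)

class NullFieldMixin(BaseField):
    def validate_empty_values(self, data):
        is_empty_value, data = super().validate_empty_values(data)
        if is_empty_value and data is None:
            return (False, data)  # keep going so the value can be coerced
        return (is_empty_value, data)

class CharNullField(NullFieldMixin, BaseField):
    def to_internal_value(self, data):
        return str(data or '')

field = CharNullField()
skip, value = field.validate_empty_values(None)
print(skip, repr(field.to_internal_value(value)))  # False ''
```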

@@ -0,0 +1,441 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Python
import re
import json
from functools import reduce

# Django
from django.core.exceptions import FieldError, ValidationError
from django.db import models
from django.db.models import Q, CharField, IntegerField, BooleanField
from django.db.models.fields import FieldDoesNotExist
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField, ForeignKey
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey
from django.utils.encoding import force_text
from django.utils.translation import ugettext_lazy as _

# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
from rest_framework.filters import BaseFilterBackend

# AWX
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names


class TypeFilterBackend(BaseFilterBackend):
    '''
    Filter on type field now returned with all objects.
    '''

    def filter_queryset(self, request, queryset, view):
        try:
            types = None
            for key, value in request.query_params.items():
                if key == 'type':
                    if ',' in value:
                        types = value.split(',')
                    else:
                        types = (value,)
            if types:
                types_map = {}
                for ct in ContentType.objects.filter(Q(app_label='main') | Q(app_label='auth', model='user')):
                    ct_model = ct.model_class()
                    if not ct_model:
                        continue
                    ct_type = get_type_for_model(ct_model)
                    types_map[ct_type] = ct.pk
                model = queryset.model
                model_type = get_type_for_model(model)
                if 'polymorphic_ctype' in get_all_field_names(model):
                    types_pks = set([v for k, v in types_map.items() if k in types])
                    queryset = queryset.filter(polymorphic_ctype_id__in=types_pks)
                elif model_type in types:
                    queryset = queryset
                else:
                    queryset = queryset.none()
            return queryset
        except FieldError as e:
            # Return a 400 for invalid field names.
            raise ParseError(*e.args)


def get_fields_from_path(model, path):
    '''
    Given a Django ORM lookup path (possibly over multiple models)
    Returns the fields in the line, and also the revised lookup path
    ex., given
        model=Organization
        path='project__timeout'
    returns tuple of fields traversed as well and a corrected path,
    for special cases we do substitutions
        ([<IntegerField for timeout>], 'project__timeout')
    '''
    # Store of all the fields used to detect repeats
    field_list = []
    new_parts = []
    for name in path.split('__'):
        if model is None:
            raise ParseError(_('No related model for field {}.').format(name))
        # HACK: Make project and inventory source filtering by old field names work for backwards compatibility.
        if model._meta.object_name in ('Project', 'InventorySource'):
            name = {
                'current_update': 'current_job',
                'last_update': 'last_job',
                'last_update_failed': 'last_job_failed',
                'last_updated': 'last_job_run',
            }.get(name, name)

        if name == 'type' and 'polymorphic_ctype' in get_all_field_names(model):
            name = 'polymorphic_ctype'
            new_parts.append('polymorphic_ctype__model')
        else:
            new_parts.append(name)

        if name in getattr(model, 'PASSWORD_FIELDS', ()):
            raise PermissionDenied(_('Filtering on password fields is not allowed.'))
        elif name == 'pk':
            field = model._meta.pk
        else:
            name_alt = name.replace("_", "")
            if name_alt in model._meta.fields_map.keys():
                field = model._meta.fields_map[name_alt]
                new_parts.pop()
                new_parts.append(name_alt)
            else:
                field = model._meta.get_field(name)
            if isinstance(field, ForeignObjectRel) and getattr(field.field, '__prevent_search__', False):
                raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
            elif getattr(field, '__prevent_search__', False):
                raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
        if field in field_list:
            # Field traversed twice, could create infinite JOINs, DoSing Tower
            raise ParseError(_('Loops not allowed in filters, detected on field {}.').format(field.name))
        field_list.append(field)
        model = getattr(field, 'related_model', None)

    return field_list, '__'.join(new_parts)


def get_field_from_path(model, path):
    '''
    Given a Django ORM lookup path (possibly over multiple models)
    Returns the last field in the line, and the revised lookup path
    ex.
        (<IntegerField for timeout>, 'project__timeout')
    '''
    field_list, new_path = get_fields_from_path(model, path)
    return (field_list[-1], new_path)


class FieldLookupBackend(BaseFilterBackend):
    '''
    Filter using field lookups provided via query string parameters.
    '''

    RESERVED_NAMES = ('page', 'page_size', 'format', 'order', 'order_by',
                      'search', 'type', 'host_filter', 'count_disabled', 'no_truncate')

    SUPPORTED_LOOKUPS = ('exact', 'iexact', 'contains', 'icontains',
                         'startswith', 'istartswith', 'endswith', 'iendswith',
                         'regex', 'iregex', 'gt', 'gte', 'lt', 'lte', 'in',
                         'isnull', 'search')

    # A list of fields that we know can be filtered on without the possiblity
    # of introducing duplicates
    NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField)

    def get_fields_from_lookup(self, model, lookup):

        if '__' in lookup and lookup.rsplit('__', 1)[-1] in self.SUPPORTED_LOOKUPS:
            path, suffix = lookup.rsplit('__', 1)
        else:
            path = lookup
            suffix = 'exact'

        if not path:
            raise ParseError(_('Query string field name not provided.'))

        # FIXME: Could build up a list of models used across relationships, use
        # those lookups combined with request.user.get_queryset(Model) to make
        # sure user cannot query using objects he could not view.
        field_list, new_path = get_fields_from_path(model, path)

        new_lookup = new_path
        new_lookup = '__'.join([new_path, suffix])
        return field_list, new_lookup

    def get_field_from_lookup(self, model, lookup):
        '''Method to match return type of single field, if needed.'''
        field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
        return (field_list[-1], new_lookup)

    def to_python_related(self, value):
        value = force_text(value)
        if value.lower() in ('none', 'null'):
            return None
        else:
            return int(value)

    def value_to_python_for_field(self, field, value):
        if isinstance(field, models.NullBooleanField):
            return to_python_boolean(value, allow_none=True)
        elif isinstance(field, models.BooleanField):
            return to_python_boolean(value)
        elif isinstance(field, (ForeignObjectRel, ManyToManyField, GenericForeignKey, ForeignKey)):
            try:
                return self.to_python_related(value)
            except ValueError:
                raise ParseError(_('Invalid {field_name} id: {field_id}').format(
                    field_name=getattr(field, 'name', 'related field'),
                    field_id=value)
                )
        else:
            return field.to_python(value)

    def value_to_python(self, model, lookup, value):
        try:
            lookup.encode("ascii")
        except UnicodeEncodeError:
            raise ValueError("%r is not an allowed field name. Must be ascii encodable." % lookup)

        field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
        field = field_list[-1]

        needs_distinct = (not all(isinstance(f, self.NO_DUPLICATES_ALLOW_LIST) for f in field_list))

        # Type names are stored without underscores internally, but are presented and
        # and serialized over the API containing underscores so we remove `_`
        # for polymorphic_ctype__model lookups.
        if new_lookup.startswith('polymorphic_ctype__model'):
            value = value.replace('_','')
        elif new_lookup.endswith('__isnull'):
            value = to_python_boolean(value)
        elif new_lookup.endswith('__in'):
            items = []
            if not value:
                raise ValueError('cannot provide empty value for __in')
            for item in value.split(','):
                items.append(self.value_to_python_for_field(field, item))
            value = items
        elif new_lookup.endswith('__regex') or new_lookup.endswith('__iregex'):
            try:
                re.compile(value)
            except re.error as e:
                raise ValueError(e.args[0])
        elif new_lookup.endswith('__search'):
            related_model = getattr(field, 'related_model', None)
            if not related_model:
                raise ValueError('%s is not searchable' % new_lookup[:-8])
            new_lookups = []
            for rm_field in related_model._meta.fields:
                if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description', 'playbook'):
                    new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name))
            return value, new_lookups, needs_distinct
        else:
            value = self.value_to_python_for_field(field, value)
        return value, new_lookup, needs_distinct

    def filter_queryset(self, request, queryset, view):
        try:
            # Apply filters specified via query_params. Each entry in the lists
            # below is (negate, field, value).
            and_filters = []
            or_filters = []
            chain_filters = []
            role_filters = []
            search_filters = {}
            needs_distinct = False
            # Can only have two values: 'AND', 'OR'
            # If 'AND' is used, an iterm must satisfy all condition to show up in the results.
            # If 'OR' is used, an item just need to satisfy one condition to appear in results.
            search_filter_relation = 'OR'
            for key, values in request.query_params.lists():
                if key in self.RESERVED_NAMES:
                    continue

                # HACK: make `created` available via API for the Django User ORM model
                # so it keep compatiblity with other objects which exposes the `created` attr.
                if queryset.model._meta.object_name == 'User' and key.startswith('created'):
                    key = key.replace('created', 'date_joined')

                # HACK: Make job event filtering by host name mostly work even
                # when not capturing job event hosts M2M.
                if queryset.model._meta.object_name == 'JobEvent' and key.startswith('hosts__name'):
                    key = key.replace('hosts__name', 'or__host__name')
                    or_filters.append((False, 'host__name__isnull', True))

                # Custom __int filter suffix (internal use only).
                q_int = False
                if key.endswith('__int'):
                    key = key[:-5]
                    q_int = True

                # RBAC filtering
                if key == 'role_level':
                    role_filters.append(values[0])
                    continue

                # Search across related objects.
                if key.endswith('__search'):
                    if values and ',' in values[0]:
                        search_filter_relation = 'AND'
                        values = reduce(lambda list1, list2: list1 + list2, [i.split(',') for i in values])
                    for value in values:
                        search_value, new_keys, _ = self.value_to_python(queryset.model, key, force_text(value))
                        assert isinstance(new_keys, list)
                        search_filters[search_value] = new_keys
                    # by definition, search *only* joins across relations,
                    # so it _always_ needs a .distinct()
                    needs_distinct = True
                    continue

                # Custom chain__ and or__ filters, mutually exclusive (both can
                # precede not__).
                q_chain = False
                q_or = False
                if key.startswith('chain__'):
                    key = key[7:]
                    q_chain = True
                elif key.startswith('or__'):
                    key = key[4:]
                    q_or = True

                # Custom not__ filter prefix.
                q_not = False
                if key.startswith('not__'):
                    key = key[5:]
                    q_not = True

                # Convert value(s) to python and add to the appropriate list.
                for value in values:
                    if q_int:
                        value = int(value)
                    value, new_key, distinct = self.value_to_python(queryset.model, key, value)
                    if distinct:
                        needs_distinct = True
                    if q_chain:
                        chain_filters.append((q_not, new_key, value))
                    elif q_or:
                        or_filters.append((q_not, new_key, value))
                    else:
                        and_filters.append((q_not, new_key, value))

            # Now build Q objects for database query filter.
            if and_filters or or_filters or chain_filters or role_filters or search_filters:
                args = []
                for n, k, v in and_filters:
                    if n:
                        args.append(~Q(**{k:v}))
                    else:
                        args.append(Q(**{k:v}))
                for role_name in role_filters:
                    if not hasattr(queryset.model, 'accessible_pk_qs'):
                        raise ParseError(_(
                            'Cannot apply role_level filter to this list because its model '
                            'does not use roles for access control.'))
                    args.append(
                        Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name))
                    )
                if or_filters:
                    q = Q()
                    for n,k,v in or_filters:
                        if n:
                            q |= ~Q(**{k:v})
                        else:
                            q |= Q(**{k:v})
                    args.append(q)
                if search_filters and search_filter_relation == 'OR':
                    q = Q()
                    for term, constrains in search_filters.items():
                        for constrain in constrains:
                            q |= Q(**{constrain: term})
                    args.append(q)
                elif search_filters and search_filter_relation == 'AND':
                    for term, constrains in search_filters.items():
                        q_chain = Q()
                        for constrain in constrains:
                            q_chain |= Q(**{constrain: term})
                        queryset = queryset.filter(q_chain)
                for n,k,v in chain_filters:
                    if n:
                        q = ~Q(**{k:v})
                    else:
                        q = Q(**{k:v})
                    queryset = queryset.filter(q)
                queryset = queryset.filter(*args)
                if needs_distinct:
                    queryset = queryset.distinct()
            return queryset
        except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e:
            raise ParseError(e.args[0])
        except ValidationError as e:
            raise ParseError(json.dumps(e.messages, ensure_ascii=False))


class OrderByBackend(BaseFilterBackend):
    '''
    Filter to apply ordering based on query string parameters.
    '''

    def filter_queryset(self, request, queryset, view):
        try:
            order_by = None
            for key, value in request.query_params.items():
                if key in ('order', 'order_by'):
                    order_by = value
                    if ',' in value:
                        order_by = value.split(',')
                    else:
                        order_by = (value,)
            if order_by is None:
                order_by = self.get_default_ordering(view)
            if order_by:
                order_by = self._validate_ordering_fields(queryset.model, order_by)

                # Special handling of the type field for ordering. In this
                # case, we're not sorting exactly on the type field, but
                # given the limited number of views with multiple types,
                # sorting on polymorphic_ctype.model is effectively the same.
                new_order_by = []
                if 'polymorphic_ctype' in get_all_field_names(queryset.model):
                    for field in order_by:
                        if field == 'type':
                            new_order_by.append('polymorphic_ctype__model')
                        elif field == '-type':
                            new_order_by.append('-polymorphic_ctype__model')
                        else:
                            new_order_by.append(field)
                else:
                    for field in order_by:
                        if field not in ('type', '-type'):
                            new_order_by.append(field)
                queryset = queryset.order_by(*new_order_by)
            return queryset
        except FieldError as e:
            # Return a 400 for invalid field names.
            raise ParseError(*e.args)

    def get_default_ordering(self, view):
        ordering = getattr(view, 'ordering', None)
        if isinstance(ordering, str):
            return (ordering,)
        return ordering

    def _validate_ordering_fields(self, model, order_by):
        for field_name in order_by:
            # strip off the negation prefix `-` if it exists
            prefix = ''
            path = field_name
            if field_name[0] == '-':
                prefix = field_name[0]
                path = field_name[1:]
            try:
                field, new_path = get_field_from_path(model, path)
                new_path = '{}{}'.format(prefix, new_path)
            except (FieldError, FieldDoesNotExist) as e:
                raise ParseError(e.args[0])
            yield new_path
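The prefix handling in `FieldLookupBackend.filter_queryset` can be sketched in isolation: `chain__` and `or__` pick the filter bucket, and `not__` may follow either to negate the lookup. A standalone sketch of just that parsing (no Django needed):

```python
# Mimics the chain__/or__/not__ prefix stripping from filter_queryset.
def classify(key):
    q_chain = q_or = q_not = False
    if key.startswith('chain__'):
        key, q_chain = key[7:], True
    elif key.startswith('or__'):
        key, q_or = key[4:], True
    # not__ may follow chain__ or or__, or stand alone.
    if key.startswith('not__'):
        key, q_not = key[5:], True
    bucket = 'chain' if q_chain else 'or' if q_or else 'and'
    return bucket, q_not, key

print(classify('or__not__name__icontains'))  # ('or', True, 'name__icontains')
print(classify('chain__status'))             # ('chain', False, 'status')
print(classify('created__gt'))               # ('and', False, 'created__gt')
```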

File diff suppressed because it is too large
@@ -0,0 +1,305 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.

from collections import OrderedDict
from uuid import UUID

# Django
from django.core.exceptions import PermissionDenied
from django.db.models.fields import PositiveIntegerField, BooleanField
from django.db.models.fields.related import ForeignKey
from django.http import Http404
from django.utils.encoding import force_text, smart_text
from django.utils.translation import ugettext_lazy as _

# Django REST Framework
from rest_framework import exceptions
from rest_framework import metadata
from rest_framework import serializers
from rest_framework.relations import RelatedField, ManyRelatedField
from rest_framework.fields import JSONField as DRFJSONField
from rest_framework.request import clone_request

# AWX
from awx.api.fields import ChoiceNullField
from awx.main.fields import JSONField, ImplicitRoleField
from awx.main.models import NotificationTemplate
from awx.main.scheduler.kubernetes import PodManager


class Metadata(metadata.SimpleMetadata):

    def get_field_info(self, field):
        field_info = OrderedDict()
        field_info['type'] = self.label_lookup[field]
        field_info['required'] = getattr(field, 'required', False)

        text_attrs = [
            'read_only', 'label', 'help_text',
            'min_length', 'max_length',
            'min_value', 'max_value',
            'category', 'category_slug',
            'defined_in_file', 'unit',
        ]

        for attr in text_attrs:
            value = getattr(field, attr, None)
            if value is not None and value != '':
                field_info[attr] = force_text(value, strings_only=True)

        placeholder = getattr(field, 'placeholder', serializers.empty)
        if placeholder is not serializers.empty:
            field_info['placeholder'] = placeholder

        serializer = getattr(field, 'parent', None)
        if serializer and hasattr(serializer, 'Meta') and hasattr(serializer.Meta, 'model'):
            # Update help text for common fields.
            field_help_text = {
                'id': _('Database ID for this {}.'),
                'name': _('Name of this {}.'),
                'description': _('Optional description of this {}.'),
                'type': _('Data type for this {}.'),
                'url': _('URL for this {}.'),
                'related': _('Data structure with URLs of related resources.'),
                'summary_fields': _('Data structure with name/description for related resources. '
                                    'The output for some objects may be limited for performance reasons.'),
                'created': _('Timestamp when this {} was created.'),
                'modified': _('Timestamp when this {} was last modified.'),
            }
            if field.field_name in field_help_text:
                opts = serializer.Meta.model._meta.concrete_model._meta
                verbose_name = smart_text(opts.verbose_name)
                field_info['help_text'] = field_help_text[field.field_name].format(verbose_name)

            if field.field_name == 'type':
                field_info['filterable'] = True
            else:
                for model_field in serializer.Meta.model._meta.fields:
                    if field.field_name == model_field.name:
                        if getattr(model_field, '__accepts_json__', None):
                            field_info['type'] = 'json'
                        field_info['filterable'] = True
                        break
                else:
                    field_info['filterable'] = False

        # Indicate if a field has a default value.
        # FIXME: Still isn't showing all default values?
        try:
            default = field.get_default()
            if type(default) is UUID:
                default = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
            if field.field_name == 'TOWER_URL_BASE' and default == 'https://towerhost':
                default = '{}://{}'.format(self.request.scheme, self.request.get_host())
            field_info['default'] = default
        except serializers.SkipField:
            pass

        if getattr(field, 'child', None):
            field_info['child'] = self.get_field_info(field.child)
        elif getattr(field, 'fields', None):
            field_info['children'] = self.get_serializer_info(field)

        if not isinstance(field, (RelatedField, ManyRelatedField)) and hasattr(field, 'choices'):
            choices = [
                (choice_value, choice_name) for choice_value, choice_name in field.choices.items()
            ]
            if not any(choice in ('', None) for choice, _ in choices):
                if field.allow_blank:
                    choices = [("", "---------")] + choices
                if field.allow_null and not isinstance(field, ChoiceNullField):
                    choices = [(None, "---------")] + choices
            field_info['choices'] = choices

        # Indicate if a field is write-only.
        if getattr(field, 'write_only', False):
            field_info['write_only'] = True

        # Special handling of notification configuration where the required properties
        # are conditional on the type selected.
        if field.field_name == 'notification_configuration':
            for (notification_type_name, notification_tr_name, notification_type_class) in NotificationTemplate.NOTIFICATION_TYPES:
                field_info[notification_type_name] = notification_type_class.init_parameters

        # Special handling of notification messages where the required properties
        # are conditional on the type selected.
        try:
            view_model = field.context['view'].model
        except (AttributeError, KeyError):
            view_model = None
        if view_model == NotificationTemplate and field.field_name == 'messages':
            for (notification_type_name, notification_tr_name, notification_type_class) in NotificationTemplate.NOTIFICATION_TYPES:
                field_info[notification_type_name] = notification_type_class.default_messages

        # Update type of fields returned...
        model_field = None
        if serializer and hasattr(serializer, 'Meta') and hasattr(serializer.Meta, 'model'):
            try:
                model_field = serializer.Meta.model._meta.get_field(field.field_name)
            except Exception:
                pass
        if field.field_name == 'type':
            field_info['type'] = 'choice'
        elif field.field_name in ('url', 'custom_virtualenv', 'token'):
            field_info['type'] = 'string'
        elif field.field_name in ('related', 'summary_fields'):
            field_info['type'] = 'object'
        elif isinstance(field, PositiveIntegerField):
            field_info['type'] = 'integer'
        elif field.field_name in ('created', 'modified'):
            field_info['type'] = 'datetime'
        elif (
            RelatedField in field.__class__.__bases__ or
            isinstance(model_field, ForeignKey)
        ):
            field_info['type'] = 'id'
        elif (
            isinstance(field, JSONField) or
            isinstance(model_field, JSONField) or
            isinstance(field, DRFJSONField) or
            isinstance(getattr(field, 'model_field', None), JSONField) or
            field.field_name == 'credential_passwords'
        ):
            field_info['type'] = 'json'
        elif (
            isinstance(field, ManyRelatedField) and
            field.field_name == 'credentials'
            # launch-time credentials
        ):
            field_info['type'] = 'list_of_ids'
        elif isinstance(model_field, BooleanField):
            field_info['type'] = 'boolean'

        return field_info

    def get_serializer_info(self, serializer, method=None):
        filterer = getattr(serializer, 'filter_field_metadata', lambda fields, method: fields)
        return filterer(
            super(Metadata, self).get_serializer_info(serializer),
            method
        )

    def determine_actions(self, request, view):
        # Add field information for GET requests (so field names/labels are
        # available even when we can't POST/PUT).
        actions = {}
        for method in {'GET', 'PUT', 'POST'} & set(view.allowed_methods):
            view.request = clone_request(request, method)
            obj = None
            try:
                # Test global permissions
                if hasattr(view, 'check_permissions'):
                    view.check_permissions(view.request)
                # Test object permissions
                if method == 'PUT' and hasattr(view, 'get_object'):
                    obj = view.get_object()
            except (exceptions.APIException, PermissionDenied, Http404):
                continue
            else:
                # If user has appropriate permissions for the view, include
                # appropriate metadata about the fields that should be supplied.
                serializer = view.get_serializer(instance=obj)
                actions[method] = self.get_serializer_info(serializer, method=method)
            finally:
                view.request = request

            for field, meta in list(actions[method].items()):
                if not isinstance(meta, dict):
                    continue

                if field == "pod_spec_override":
                    meta['default'] = PodManager().pod_definition

                # Add type choices if available from the serializer.
                if field == 'type' and hasattr(serializer, 'get_type_choices'):
                    meta['choices'] = serializer.get_type_choices()

                # For GET method, remove meta attributes that aren't relevant
                # when reading a field and remove write-only fields.
                if method == 'GET':
                    attrs_to_remove = ('required', 'read_only', 'default', 'min_length', 'max_length', 'placeholder')
                    for attr in attrs_to_remove:
                        meta.pop(attr, None)
                        meta.get('child', {}).pop(attr, None)
                    if meta.pop('write_only', False):
                        actions['GET'].pop(field)

                # For PUT/POST methods, remove read-only fields.
                if method in ('PUT', 'POST'):
                    # This value should always be False for PUT/POST, so don't
                    # show it (file-based read-only settings can't be updated)
                    meta.pop('defined_in_file', False)
|
|
||||||
|
if meta.pop('read_only', False):
|
||||||
|
if field == 'id' and hasattr(view, 'attach'):
|
||||||
|
continue
|
||||||
|
actions[method].pop(field)
|
||||||
|
|
||||||
|
return actions
|
||||||
|
|
||||||
|
def determine_metadata(self, request, view):
|
||||||
|
# store request on self so we can use it to generate field defaults
|
||||||
|
# (such as TOWER_URL_BASE)
|
||||||
|
self.request = request
|
||||||
|
|
||||||
|
try:
|
||||||
|
setattr(view, '_request', request)
|
||||||
|
metadata = super(Metadata, self).determine_metadata(request, view)
|
||||||
|
finally:
|
||||||
|
delattr(view, '_request')
|
||||||
|
|
||||||
|
# Add type(s) handled by this view/serializer.
|
||||||
|
if hasattr(view, 'get_serializer'):
|
||||||
|
serializer = view.get_serializer()
|
||||||
|
if hasattr(serializer, 'get_types'):
|
||||||
|
metadata['types'] = serializer.get_types()
|
||||||
|
|
||||||
|
# Add search fields if available from the view.
|
||||||
|
if getattr(view, 'search_fields', None):
|
||||||
|
metadata['search_fields'] = view.search_fields
|
||||||
|
|
||||||
|
# Add related search fields if available from the view.
|
||||||
|
if getattr(view, 'related_search_fields', None):
|
||||||
|
metadata['related_search_fields'] = view.related_search_fields
|
||||||
|
|
||||||
|
# include role names in metadata
|
||||||
|
roles = []
|
||||||
|
model = getattr(view, 'model', None)
|
||||||
|
if model:
|
||||||
|
for field in model._meta.get_fields():
|
||||||
|
if type(field) is ImplicitRoleField:
|
||||||
|
roles.append(field.name)
|
||||||
|
if len(roles) > 0:
|
||||||
|
metadata['object_roles'] = roles
|
||||||
|
|
||||||
|
from rest_framework import generics
|
||||||
|
if isinstance(view, generics.ListAPIView) and hasattr(view, 'paginator'):
|
||||||
|
metadata['max_page_size'] = view.paginator.max_page_size
|
||||||
|
|
||||||
|
return metadata
|
||||||
|
|
||||||
|
|
||||||
|
class RoleMetadata(Metadata):
|
||||||
|
def determine_metadata(self, request, view):
|
||||||
|
metadata = super(RoleMetadata, self).determine_metadata(request, view)
|
||||||
|
if 'actions' in metadata:
|
||||||
|
metadata['actions'].pop('POST')
|
||||||
|
metadata['actions']['POST'] = {
|
||||||
|
"id": {"type": "integer", "label": "ID", "help_text": "Database ID for this role."},
|
||||||
|
"disassociate": {"type": "integer", "label": "Disassociate", "help_text": "Provide to remove this role."},
|
||||||
|
}
|
||||||
|
return metadata
|
||||||
|
|
||||||
|
|
||||||
|
class SublistAttachDetatchMetadata(Metadata):
|
||||||
|
|
||||||
|
def determine_actions(self, request, view):
|
||||||
|
actions = super(SublistAttachDetatchMetadata, self).determine_actions(request, view)
|
||||||
|
method = 'POST'
|
||||||
|
if method in actions:
|
||||||
|
for field in list(actions[method].keys()):
|
||||||
|
if field == 'id':
|
||||||
|
continue
|
||||||
|
actions[method].pop(field)
|
||||||
|
return actions
|
||||||
|
|
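The method-aware filtering loop in `determine_actions` above can be illustrated in isolation. This is a hypothetical, self-contained sketch (not AWX code): GET metadata drops write-time attributes and write-only fields, while PUT/POST metadata drops read-only fields.

```python
# Standalone sketch of the per-HTTP-method field filtering shown above.
# Field names and metadata keys here are illustrative.
def filter_fields(fields, method):
    out = {name: dict(meta) for name, meta in fields.items()}
    for name, meta in list(out.items()):
        if method == 'GET':
            # Readers don't need write-time attributes.
            for attr in ('required', 'default', 'min_length', 'max_length'):
                meta.pop(attr, None)
            if meta.pop('write_only', False):
                out.pop(name)
        elif method in ('PUT', 'POST'):
            # Writers can't supply read-only fields.
            if meta.pop('read_only', False):
                out.pop(name)
    return out

fields = {
    'id': {'type': 'integer', 'read_only': True},
    'name': {'type': 'string', 'required': True},
    'password': {'type': 'string', 'write_only': True},
}
print(sorted(filter_fields(fields, 'GET')))   # ['id', 'name']
print(sorted(filter_fields(fields, 'POST')))  # ['name', 'password']
```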
@ -0,0 +1,15 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.

from django.conf.urls import url

from awx.api.views import (
    MetricsView
)


urls = [
    url(r'^$', MetricsView.as_view(), name='metrics_view'),
]

__all__ = ['urls']
@ -0,0 +1,4 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Empty models file.
@ -0,0 +1,69 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Django REST Framework
from django.conf import settings
from django.core.paginator import Paginator as DjangoPaginator
from rest_framework import pagination
from rest_framework.response import Response
from rest_framework.utils.urls import replace_query_param


class DisabledPaginator(DjangoPaginator):

    @property
    def num_pages(self):
        return 1

    @property
    def count(self):
        return 200


class Pagination(pagination.PageNumberPagination):

    page_size_query_param = 'page_size'
    max_page_size = settings.MAX_PAGE_SIZE
    count_disabled = False

    def get_next_link(self):
        if not self.page.has_next():
            return None
        url = self.request and self.request.get_full_path() or ''
        url = url.encode('utf-8')
        page_number = self.page.next_page_number()
        return replace_query_param(self.cap_page_size(url), self.page_query_param, page_number)

    def get_previous_link(self):
        if not self.page.has_previous():
            return None
        url = self.request and self.request.get_full_path() or ''
        url = url.encode('utf-8')
        page_number = self.page.previous_page_number()
        return replace_query_param(self.cap_page_size(url), self.page_query_param, page_number)

    def cap_page_size(self, url):
        if int(self.request.query_params.get(self.page_size_query_param, 0)) > self.max_page_size:
            url = replace_query_param(url, self.page_size_query_param, self.max_page_size)
        return url

    def get_html_context(self):
        context = super().get_html_context()
        context['page_links'] = [pl._replace(url=self.cap_page_size(pl.url))
                                 for pl in context['page_links']]

        return context

    def paginate_queryset(self, queryset, request, **kwargs):
        self.count_disabled = 'count_disabled' in request.query_params
        try:
            if self.count_disabled:
                self.django_paginator_class = DisabledPaginator
            return super(Pagination, self).paginate_queryset(queryset, request, **kwargs)
        finally:
            self.django_paginator_class = DjangoPaginator

    def get_paginated_response(self, data):
        if self.count_disabled:
            return Response({'results': data})
        return super(Pagination, self).get_paginated_response(data)
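`cap_page_size` above rewrites an over-large `page_size` query parameter using DRF's `replace_query_param`. The same clamping can be sketched with only the standard library (the limit of 200 is an assumed stand-in for `settings.MAX_PAGE_SIZE`):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qs

MAX_PAGE_SIZE = 200  # assumed stand-in for settings.MAX_PAGE_SIZE


def cap_page_size(url, max_page_size=MAX_PAGE_SIZE):
    """Clamp the page_size query parameter to max_page_size."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qs(query)
    requested = int(params.get('page_size', ['0'])[0])
    if requested > max_page_size:
        params['page_size'] = [str(max_page_size)]
    new_query = urlencode(params, doseq=True)
    return urlunsplit((scheme, netloc, path, new_query, frag))


print(cap_page_size('/api/v2/jobs/?page=2&page_size=9999'))
# /api/v2/jobs/?page=2&page_size=200
```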
@ -0,0 +1,36 @@
# Python
from collections import OrderedDict
import json

# Django
from django.conf import settings
from django.utils.encoding import smart_str
from django.utils.translation import ugettext_lazy as _

# Django REST Framework
from rest_framework import parsers
from rest_framework.exceptions import ParseError


class JSONParser(parsers.JSONParser):
    """
    Parses JSON-serialized data, preserving order of dictionary keys.
    """

    def parse(self, stream, media_type=None, parser_context=None):
        """
        Parses the incoming bytestream as JSON and returns the resulting data.
        """
        parser_context = parser_context or {}
        encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)

        try:
            data = smart_str(stream.read(), encoding=encoding)
            if not data:
                return {}
            obj = json.loads(data, object_pairs_hook=OrderedDict)
            if not isinstance(obj, dict) and obj is not None:
                raise ParseError(_('JSON parse error - not a JSON object'))
            return obj
        except ValueError as exc:
            raise ParseError(_('JSON parse error - %s\nPossible cause: trailing comma.' % str(exc)))
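The key detail in `JSONParser.parse` above is `object_pairs_hook=OrderedDict`, which preserves the order of keys exactly as they appear in the request body. A minimal stdlib demonstration:

```python
import json
from collections import OrderedDict

payload = '{"zebra": 1, "apple": 2, "mango": 3}'

# Plain dicts in CPython 3.7+ also preserve insertion order, but
# object_pairs_hook makes the ordering guarantee explicit and portable.
obj = json.loads(payload, object_pairs_hook=OrderedDict)
print(list(obj))  # ['zebra', 'apple', 'mango']
```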
@ -0,0 +1,256 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Python
import logging

# Django REST Framework
from rest_framework.exceptions import MethodNotAllowed, PermissionDenied
from rest_framework import permissions

# AWX
from awx.main.access import check_user_access
from awx.main.models import Inventory, UnifiedJob
from awx.main.utils import get_object_or_400

logger = logging.getLogger('awx.api.permissions')

__all__ = ['ModelAccessPermission', 'JobTemplateCallbackPermission', 'VariableDataPermission',
           'TaskPermission', 'ProjectUpdatePermission', 'InventoryInventorySourcesUpdatePermission',
           'UserPermission', 'IsSuperUser', 'InstanceGroupTowerPermission', 'WorkflowApprovalPermission']


class ModelAccessPermission(permissions.BasePermission):
    '''
    Default permissions class to check user access based on the model and
    request method, optionally verifying the request data.
    '''

    def check_options_permissions(self, request, view, obj=None):
        return self.check_get_permissions(request, view, obj)

    def check_head_permissions(self, request, view, obj=None):
        return self.check_get_permissions(request, view, obj)

    def check_get_permissions(self, request, view, obj=None):
        if hasattr(view, 'parent_model'):
            parent_obj = view.get_parent_object()
            if not check_user_access(request.user, view.parent_model, 'read',
                                     parent_obj):
                return False
        if not obj:
            return True
        return check_user_access(request.user, view.model, 'read', obj)

    def check_post_permissions(self, request, view, obj=None):
        if hasattr(view, 'parent_model'):
            parent_obj = view.get_parent_object()
            if not check_user_access(request.user, view.parent_model, 'read',
                                     parent_obj):
                return False
            if hasattr(view, 'parent_key'):
                if not check_user_access(request.user, view.model, 'add', {view.parent_key: parent_obj}):
                    return False
            return True
        elif hasattr(view, 'obj_permission_type'):
            # Generic object-centric view permission check without object not needed
            if not obj:
                return True
            # Permission check that happens when get_object() is called
            extra_kwargs = {}
            if view.obj_permission_type == 'admin':
                extra_kwargs['data'] = {}
            return check_user_access(
                request.user, view.model, view.obj_permission_type, obj,
                **extra_kwargs
            )
        else:
            if obj:
                return True
            return check_user_access(request.user, view.model, 'add', request.data)

    def check_put_permissions(self, request, view, obj=None):
        if not obj:
            # FIXME: For some reason this needs to return True
            # because it is first called with obj=None?
            return True
        return check_user_access(request.user, view.model, 'change', obj,
                                 request.data)

    def check_patch_permissions(self, request, view, obj=None):
        return self.check_put_permissions(request, view, obj)

    def check_delete_permissions(self, request, view, obj=None):
        if not obj:
            # FIXME: For some reason this needs to return True
            # because it is first called with obj=None?
            return True

        return check_user_access(request.user, view.model, 'delete', obj)

    def check_permissions(self, request, view, obj=None):
        '''
        Perform basic permissions checking before delegating to the appropriate
        method based on the request method.
        '''

        # Don't allow anonymous users. 401, not 403, hence no raised exception.
        if not request.user or request.user.is_anonymous:
            return False

        # Always allow superusers
        if getattr(view, 'always_allow_superuser', True) and request.user.is_superuser:
            return True

        # Check if view supports the request method before checking permission
        # based on request method.
        if request.method.upper() not in view.allowed_methods:
            raise MethodNotAllowed(request.method)

        # Check permissions for the given view and object, based on the request
        # method used.
        check_method = getattr(self, 'check_%s_permissions' % request.method.lower(), None)
        result = check_method and check_method(request, view, obj)
        if not result:
            raise PermissionDenied()

        return result

    def has_permission(self, request, view, obj=None):
        logger.debug('has_permission(user=%s method=%s data=%r, %s, %r)',
                     request.user, request.method, request.data,
                     view.__class__.__name__, obj)
        try:
            response = self.check_permissions(request, view, obj)
        except Exception as e:
            logger.debug('has_permission raised %r', e, exc_info=True)
            raise
        else:
            logger.debug('has_permission returned %r', response)
            return response

    def has_object_permission(self, request, view, obj):
        return self.has_permission(request, view, obj)


class JobTemplateCallbackPermission(ModelAccessPermission):
    '''
    Permission check used by job template callback view for requests from
    ephemeral hosts.
    '''

    def has_permission(self, request, view, obj=None):
        # If another authentication method was used and it's not a POST, return
        # True to fall through to the next permission class.
        if (request.user or request.auth) and request.method.lower() != 'post':
            return super(JobTemplateCallbackPermission, self).has_permission(request, view, obj)

        # Require method to be POST, host_config_key to be specified and match
        # the requested job template, and require the job template to be
        # active in order to proceed.
        host_config_key = request.data.get('host_config_key', '')
        if request.method.lower() != 'post':
            raise PermissionDenied()
        elif not host_config_key:
            raise PermissionDenied()
        elif obj and obj.host_config_key != host_config_key:
            raise PermissionDenied()
        else:
            return True


class VariableDataPermission(ModelAccessPermission):

    def check_put_permissions(self, request, view, obj=None):
        if not obj:
            return True
        return check_user_access(request.user, view.model, 'change', obj,
                                 dict(variables=request.data))


class TaskPermission(ModelAccessPermission):
    '''
    Permission checks used for API callbacks from running a task.
    '''

    def has_permission(self, request, view, obj=None):
        # If another authentication method was used other than the one for
        # callbacks, default to the superclass permissions checking.
        if request.user or not request.auth:
            return super(TaskPermission, self).has_permission(request, view, obj)

        # Verify that the ID present in the auth token is for a valid, active
        # unified job.
        try:
            unified_job = UnifiedJob.objects.get(status='running',
                                                 pk=int(request.auth.split('-')[0]))
        except (UnifiedJob.DoesNotExist, TypeError):
            return False

        # Verify that the request method is one of those allowed for the given
        # view, also that the job or inventory being accessed matches the auth
        # token.
        if view.model == Inventory and request.method.lower() in ('head', 'get'):
            return bool(not obj or obj.pk == unified_job.inventory_id)
        else:
            return False


class WorkflowApprovalPermission(ModelAccessPermission):
    '''
    Permission check used by workflow `approval` and `deny` views to determine
    who has access to approve and deny paused workflow nodes.
    '''

    def check_post_permissions(self, request, view, obj=None):
        approval = get_object_or_400(view.model, pk=view.kwargs['pk'])
        return check_user_access(request.user, view.model, 'approve_or_deny', approval)


class ProjectUpdatePermission(ModelAccessPermission):
    '''
    Permission check used by ProjectUpdateView to determine who can update projects.
    '''
    def check_get_permissions(self, request, view, obj=None):
        project = get_object_or_400(view.model, pk=view.kwargs['pk'])
        return check_user_access(request.user, view.model, 'read', project)

    def check_post_permissions(self, request, view, obj=None):
        project = get_object_or_400(view.model, pk=view.kwargs['pk'])
        return check_user_access(request.user, view.model, 'start', project)


class InventoryInventorySourcesUpdatePermission(ModelAccessPermission):
    def check_post_permissions(self, request, view, obj=None):
        inventory = get_object_or_400(view.model, pk=view.kwargs['pk'])
        return check_user_access(request.user, view.model, 'update', inventory)


class UserPermission(ModelAccessPermission):
    def check_post_permissions(self, request, view, obj=None):
        if not request.data:
            return request.user.admin_of_organizations.exists()
        elif request.user.is_superuser:
            return True
        raise PermissionDenied()


class IsSuperUser(permissions.BasePermission):
    """
    Allows access only to admin users.
    """

    def has_permission(self, request, view):
        return request.user and request.user.is_superuser


class InstanceGroupTowerPermission(ModelAccessPermission):
    def has_object_permission(self, request, view, obj):
        if request.method == 'DELETE' and obj.name == "tower":
            return False
        return super(InstanceGroupTowerPermission, self).has_object_permission(request, view, obj)


class WebhookKeyPermission(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        return request.user.can_access(view.model, 'admin', obj, request.data)
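`check_permissions` above routes each HTTP verb to a `check_<verb>_permissions` method by building the attribute name at runtime. A self-contained sketch of that `getattr` dispatch pattern (class and return values here are illustrative, not AWX's API):

```python
class VerbDispatcher:
    """Route a request method to check_<verb>_permissions via getattr."""

    def check_get_permissions(self):
        return 'read check'

    def check_post_permissions(self):
        return 'create check'

    def check_permissions(self, method):
        # Build the handler name from the HTTP verb, as the class above does.
        handler = getattr(self, 'check_%s_permissions' % method.lower(), None)
        if handler is None:
            raise ValueError('method not allowed: %s' % method)
        return handler()


d = VerbDispatcher()
print(d.check_permissions('GET'))   # read check
print(d.check_permissions('POST'))  # create check
```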
@ -0,0 +1,142 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

from django.utils.safestring import SafeText
from prometheus_client.parser import text_string_to_metric_families

# Django REST Framework
from rest_framework import renderers
from rest_framework.request import override_method
from rest_framework.utils import encoders


class SurrogateEncoder(encoders.JSONEncoder):

    def encode(self, obj):
        ret = super(SurrogateEncoder, self).encode(obj)
        try:
            ret.encode()
        except UnicodeEncodeError as e:
            if 'surrogates not allowed' in e.reason:
                ret = ret.encode('utf-8', 'replace').decode()
        return ret


class DefaultJSONRenderer(renderers.JSONRenderer):

    encoder_class = SurrogateEncoder


class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
    '''
    Customizations to the default browsable API renderer.
    '''

    def get_default_renderer(self, view):
        renderer = super(BrowsableAPIRenderer, self).get_default_renderer(view)
        # Always use JSON renderer for browsable OPTIONS response.
        if view.request.method == 'OPTIONS' and not isinstance(renderer, renderers.JSONRenderer):
            return renderers.JSONRenderer()
        return renderer

    def get_content(self, renderer, data, accepted_media_type, renderer_context):
        if isinstance(data, SafeText):
            # Older versions of Django (pre-2.0) have a py3 bug which causes
            # bytestrings marked as "safe" to not actually get _treated_ as
            # safe; this causes certain embedded strings (like the stdout HTML
            # view) to be improperly escaped.
            # see: https://github.com/ansible/awx/issues/3108
            # https://code.djangoproject.com/ticket/28121
            return data
        return super(BrowsableAPIRenderer, self).get_content(renderer, data,
                                                             accepted_media_type,
                                                             renderer_context)

    def get_context(self, data, accepted_media_type, renderer_context):
        # Store the associated response status to know how to populate the raw
        # data form.
        try:
            setattr(renderer_context['view'], '_raw_data_response_status', renderer_context['response'].status_code)
            setattr(renderer_context['view'], '_request', renderer_context['request'])
            return super(BrowsableAPIRenderer, self).get_context(data, accepted_media_type, renderer_context)
        finally:
            delattr(renderer_context['view'], '_raw_data_response_status')
            delattr(renderer_context['view'], '_request')

    def get_raw_data_form(self, data, view, method, request):
        # Set a flag on the view to indicate to the view/serializer that we're
        # creating a raw data form for the browsable API. Store the original
        # request method to determine how to populate the raw data form.
        if request.method in {'OPTIONS', 'DELETE'}:
            return
        try:
            setattr(view, '_raw_data_form_marker', True)
            setattr(view, '_raw_data_request_method', request.method)
            return super(BrowsableAPIRenderer, self).get_raw_data_form(data, view, method, request)
        finally:
            delattr(view, '_raw_data_form_marker')
            delattr(view, '_raw_data_request_method')

    def get_rendered_html_form(self, data, view, method, request):
        # Never show auto-generated form (only raw form).
        obj = getattr(view, 'object', None)
        if obj is None and hasattr(view, 'get_object') and hasattr(view, 'retrieve'):
            try:
                view.object = view.get_object()
                obj = view.object
            except Exception:
                obj = None
        with override_method(view, request, method) as request:
            if not self.show_form_for_method(view, method, request, obj):
                return
            if method in ('DELETE', 'OPTIONS'):
                return True  # Don't actually need to return a form

    def get_filter_form(self, data, view, request):
        # Don't show filter form in browsable API.
        return


class PlainTextRenderer(renderers.BaseRenderer):

    media_type = 'text/plain'
    format = 'txt'

    def render(self, data, media_type=None, renderer_context=None):
        if not isinstance(data, str):
            data = str(data)
        return data.encode(self.charset)


class DownloadTextRenderer(PlainTextRenderer):

    format = "txt_download"


class AnsiTextRenderer(PlainTextRenderer):

    media_type = 'text/plain'
    format = 'ansi'


class AnsiDownloadRenderer(PlainTextRenderer):

    format = "ansi_download"


class PrometheusJSONRenderer(renderers.JSONRenderer):

    def render(self, data, accepted_media_type=None, renderer_context=None):
        if isinstance(data, dict):
            # HTTP errors are {'detail': ErrorDetail(string='...', code=...)}
            return super(PrometheusJSONRenderer, self).render(
                data, accepted_media_type, renderer_context
            )
        parsed_metrics = text_string_to_metric_families(data)
        data = {}
        for family in parsed_metrics:
            for sample in family.samples:
                data[sample[0]] = {"labels": sample[1], "value": sample[2]}
        return super(PrometheusJSONRenderer, self).render(
            data, accepted_media_type, renderer_context
        )
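`PrometheusJSONRenderer` above relies on `prometheus_client.parser.text_string_to_metric_families` to turn the Prometheus text exposition format into `{name: {labels, value}}` entries. A deliberately simplified, stdlib-only sketch of that conversion (it handles only bare one-line samples, unlike the real parser):

```python
import re

EXPOSITION = """\
# HELP awx_system_info AWX system information
awx_system_info{install_uuid="abc"} 1.0
awx_pending_jobs_total 3.0
"""

# Matches: metric_name{optional,labels} value (no escapes or commas in values)
SAMPLE_RE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$')


def parse_exposition(text):
    """Parse a tiny subset of the Prometheus text format: one sample per line."""
    data = {}
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blanks and HELP/TYPE comments
        m = SAMPLE_RE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = {}
        if raw_labels:
            for pair in raw_labels.split(','):
                key, val = pair.split('=', 1)
                labels[key] = val.strip('"')
        data[name] = {'labels': labels, 'value': float(value)}
    return data


print(parse_exposition(EXPOSITION)['awx_pending_jobs_total'])
# {'labels': {}, 'value': 3.0}
```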
File diff suppressed because it is too large
@ -0,0 +1,113 @@
import json
import warnings

from coreapi.document import Object, Link

from rest_framework import exceptions
from rest_framework.permissions import AllowAny
from rest_framework.renderers import CoreJSONRenderer
from rest_framework.response import Response
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from rest_framework.views import APIView

from rest_framework_swagger import renderers


class SuperUserSchemaGenerator(SchemaGenerator):

    def has_view_permissions(self, path, method, view):
        #
        # Generate the Swagger schema as if you were a superuser and
        # permissions didn't matter; this short-circuits the schema path
        # discovery to include _all_ potential paths in the API.
        #
        return True


class AutoSchema(DRFAuthSchema):

    def get_link(self, path, method, base_url):
        link = super(AutoSchema, self).get_link(path, method, base_url)
        try:
            serializer = self.view.get_serializer()
        except Exception:
            serializer = None
            warnings.warn('{}.get_serializer() raised an exception during '
                          'schema generation. Serializer fields will not be '
                          'generated for {} {}.'
                          .format(self.view.__class__.__name__, method, path))

        link.__dict__['deprecated'] = getattr(self.view, 'deprecated', False)

        # auto-generate a topic/tag for the serializer based on its model
        if hasattr(self.view, 'swagger_topic'):
            link.__dict__['topic'] = str(self.view.swagger_topic).title()
        elif serializer and hasattr(serializer, 'Meta'):
            link.__dict__['topic'] = str(
                serializer.Meta.model._meta.verbose_name_plural
            ).title()
        elif hasattr(self.view, 'model'):
            link.__dict__['topic'] = str(self.view.model._meta.verbose_name_plural).title()
        else:
            warnings.warn('Could not determine a Swagger tag for path {}'.format(path))
        return link

    def get_description(self, path, method):
        setattr(self.view.request, 'swagger_method', method)
        description = super(AutoSchema, self).get_description(path, method)
        return description


class SwaggerSchemaView(APIView):
    _ignore_model_permissions = True
    exclude_from_schema = True
    permission_classes = [AllowAny]
    renderer_classes = [
        CoreJSONRenderer,
        renderers.OpenAPIRenderer,
        renderers.SwaggerUIRenderer
    ]

    def get(self, request):
        generator = SuperUserSchemaGenerator(
            title='Ansible Tower API',
            patterns=None,
            urlconf=None
        )
        schema = generator.get_schema(request=request)
        # python core-api doesn't support the deprecation yet, so track it
        # ourselves and return it in a response header
        _deprecated = []

        # By default, DRF OpenAPI serialization places all endpoints in
        # a single node based on their root path (/api). Instead, we want to
        # group them by topic/tag so that they're categorized in the rendered
        # output.
        document = schema._data.pop('api')
        for path, node in document.items():
            if isinstance(node, Object):
                for action in node.values():
                    topic = getattr(action, 'topic', None)
                    if topic:
                        schema._data.setdefault(topic, Object())
                        schema._data[topic]._data[path] = node

                    if isinstance(action, Object):
                        for link in action.links.values():
                            if link.deprecated:
                                _deprecated.append(link.url)
            elif isinstance(node, Link):
                topic = getattr(node, 'topic', None)
                if topic:
                    schema._data.setdefault(topic, Object())
                    schema._data[topic]._data[path] = node

        if not schema:
            raise exceptions.ValidationError(
                'The schema generator did not return a schema Document'
            )

        return Response(
            schema,
            headers={'X-Deprecated-Paths': json.dumps(_deprecated)}
        )
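The loop in `SwaggerSchemaView.get` above regroups a flat path-to-endpoint mapping into per-topic buckets with `setdefault`. The same reshaping in plain Python, with simplified dicts standing in for coreapi `Object` nodes:

```python
# Flat path -> node mapping; 'topic' mimics the tag attached in get_link().
flat = {
    '/api/v2/jobs/': {'topic': 'Jobs'},
    '/api/v2/job_templates/': {'topic': 'Job Templates'},
    '/api/v2/jobs/{id}/': {'topic': 'Jobs'},
}

grouped = {}
for path, node in flat.items():
    topic = node.get('topic')
    if topic:
        # Create the topic bucket on first use, then file the path under it.
        grouped.setdefault(topic, {})[path] = node

print(sorted(grouped))          # ['Job Templates', 'Jobs']
print(sorted(grouped['Jobs']))  # ['/api/v2/jobs/', '/api/v2/jobs/{id}/']
```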
@ -0,0 +1 @@
> _This resource has been deprecated and will be removed in a future release_
@@ -0,0 +1,146 @@
The resulting data structure contains:

    {
        "count": 99,
        "next": null,
        "previous": null,
        "results": [
            ...
        ]
    }

The `count` field indicates the total number of {{ model_verbose_name_plural }}
found for the given query. The `next` and `previous` fields provide links to
additional results if there are more than will fit on a single page. The
`results` list contains zero or more {{ model_verbose_name }} records.
## Results

Each {{ model_verbose_name }} data structure includes the following fields:

{% include "api/_result_fields_common.md" %}
## Sorting

To specify that {{ model_verbose_name_plural }} are returned in a particular
order, use the `order_by` query string parameter on the GET request.

    ?order_by={{ order_field }}

Prefix the field name with a dash `-` to sort in reverse:

    ?order_by=-{{ order_field }}

Multiple sorting fields may be specified by separating the field names with a
comma `,`:

    ?order_by={{ order_field }},some_other_field
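As a sketch, the sorting rules above can be composed programmatically with Python's standard library (the endpoint path and field names here are illustrative, not taken from this document):

```python
from urllib.parse import urlencode

def build_list_url(base, order_by=None, **filters):
    """Compose a list-endpoint URL with optional order_by sorting."""
    params = dict(filters)
    if order_by:
        # Multiple sort fields are joined with commas; a leading "-" reverses.
        if isinstance(order_by, (list, tuple)):
            order_by = ",".join(order_by)
        params["order_by"] = order_by
    return base + "?" + urlencode(params) if params else base

# Sort by newest first, then by name (hypothetical fields).
url = build_list_url("/api/v2/hosts/", order_by=["-created", "name"])
```

The comma is percent-encoded by `urlencode`, which the server decodes back before splitting on it.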
## Pagination

Use the `page_size` query string parameter to change the number of results
returned for each request. Use the `page` query string parameter to retrieve
a particular page of results.

    ?page_size=100&page=2

The `previous` and `next` links returned with the results will set these query
string parameters automatically.
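Because each page embeds a ready-made `next` URL, a client can page through an entire collection with a simple loop. A minimal sketch, using a dict of canned pages in place of real HTTP responses (the URLs and payloads are invented for illustration):

```python
def fetch_all(get_page, first_url):
    """Follow `next` links until exhausted, accumulating `results`.

    `get_page` is any callable returning the decoded JSON dict for a URL.
    """
    results, url = [], first_url
    while url:
        page = get_page(url)
        results.extend(page["results"])
        url = page["next"]  # null/None on the last page
    return results

# Fake two-page response set standing in for live API calls.
pages = {
    "/api/v2/hosts/?page_size=2": {"results": [1, 2],
                                   "next": "/api/v2/hosts/?page=2&page_size=2"},
    "/api/v2/hosts/?page=2&page_size=2": {"results": [3], "next": None},
}
```

Passing a real HTTP fetcher as `get_page` would page through a live endpoint the same way.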
## Searching

Use the `search` query string parameter to perform a case-insensitive search
within all designated text fields of a model.

    ?search=findme

(_Added in Ansible Tower 3.1.0_) Search across related fields:

    ?related__search=findme

Note: If you want to provide more than one search term, multiple
search fields with the same key, like `?related__search=foo&related__search=bar`,
will be ORed together. Terms separated by commas, like `?related__search=foo,bar`,
will be ANDed together.
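The OR-versus-AND distinction above maps directly onto how the query string is built: repeated keys for OR, one comma-joined value for AND. A small sketch with Python's standard library:

```python
from urllib.parse import urlencode

# Repeated keys: terms are ORed together by the server.
or_query = urlencode([("related__search", "foo"), ("related__search", "bar")])

# One comma-joined value: terms are ANDed together.
and_query = urlencode({"related__search": "foo,bar"})
```

Note the comma is sent percent-encoded (`%2C`); the server decodes it before splitting.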
## Filtering

Any additional query string parameters may be used to filter the list of
results returned to those matching a given value. Only fields and relations
that exist in the database may be used for filtering. Any special characters
in the specified value should be url-encoded. For example:

    ?field=value%20xyz

Fields may also span relations, only for fields and relationships defined in
the database:

    ?other__field=value

To exclude results matching certain criteria, prefix the field parameter with
`not__`:

    ?not__field=value

By default, all query string filters are AND'ed together, so
only the results matching *all* filters will be returned. To combine results
matching *any* one of multiple criteria, prefix each query string parameter
with `or__`:

    ?or__field=value&or__field=othervalue
    ?or__not__field=value&or__field=othervalue

(_Added in Ansible Tower 1.4.5_) The default AND filtering applies all filters
simultaneously to each related object being filtered across database
relationships. The chain filter instead applies filters separately for each
related object. To use, prefix the query string parameter with `chain__`:

    ?chain__related__field=value&chain__related__field2=othervalue
    ?chain__not__related__field=value&chain__related__field2=othervalue

If the first query above were written as
`?related__field=value&related__field2=othervalue`, it would return only the
primary objects where the *same* related object satisfied both conditions. As
written using the chain filter, it would return the intersection of primary
objects matching each condition.
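The `not__` and `or__` prefixes compose by plain string concatenation on the key, so a client can build them mechanically. A minimal sketch (field names and values are invented for illustration):

```python
from urllib.parse import urlencode

def exclude(field):
    """Negate a filter: results matching it are excluded."""
    return "not__" + field

def any_of(field):
    """Mark a filter as part of an OR group."""
    return "or__" + field

# Exclude failed items; match name "web" OR name "db" (hypothetical fields).
query = urlencode([
    (exclude("status"), "failed"),
    (any_of("name"), "web"),
    (any_of("name"), "db"),
])
```

A sequence of pairs (rather than a dict) is passed to `urlencode` so the repeated `or__name` key survives.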
Field lookups may also be used for more advanced queries, by appending the
lookup to the field name:

    ?field__lookup=value

The following field lookups are supported:

* `exact`: Exact match (default lookup if not specified).
* `iexact`: Case-insensitive version of `exact`.
* `contains`: Field contains value.
* `icontains`: Case-insensitive version of `contains`.
* `startswith`: Field starts with value.
* `istartswith`: Case-insensitive version of `startswith`.
* `endswith`: Field ends with value.
* `iendswith`: Case-insensitive version of `endswith`.
* `regex`: Field matches the given regular expression.
* `iregex`: Case-insensitive version of `regex`.
* `gt`: Greater than comparison.
* `gte`: Greater than or equal to comparison.
* `lt`: Less than comparison.
* `lte`: Less than or equal to comparison.
* `isnull`: Check whether the given field or related object is null; expects a
  boolean value.
* `in`: Check whether the given field's value is present in the list provided;
  expects a list of items.

Boolean values may be specified as `True` or `1` for true, `False` or `0` for
false (both case-insensitive).

Null values may be specified as `None` or `Null` (both case-insensitive),
though it is preferred to use the `isnull` lookup to explicitly check for null
values.

Lists (for the `in` lookup) may be specified as a comma-separated list of
values.

(_Added in Ansible Tower 3.1.0_) Filtering based on the requesting user's
level of access by query string parameter.

* `role_level`: Level of role to filter on, such as `admin_role`
@@ -0,0 +1,7 @@
{% for fn, fm in serializer_fields.items %}{% spaceless %}
{% if write_only and fm.read_only or not write_only and fm.write_only or write_only and fn == parent_key %}
{% else %}
* `{{ fn }}`: {{ fm.help_text|capfirst }} ({{ fm.type }}{% if write_only and fm.required %}, required{% endif %}{% if write_only and fm.read_only %}, read-only{% endif %}{% if write_only and not fm.choices and not fm.required %}, default=`{% if fm.type == "string" or fm.type == "email" %}"{% firstof fm.default "" %}"{% else %}{% if fm.type == "field" and not fm.default %}None{% else %}{{ fm.default }}{% endif %}{% endif %}`{% endif %}){% if fm.choices %}{% for c in fm.choices %}
    - `{% if c.0 == "" %}""{% else %}{{ c.0 }}{% endif %}`{% if c.1 != c.0 %}: {{ c.1 }}{% endif %}{% if write_only and c.0 == fm.default %} (default){% endif %}{% endfor %}{% endif %}{% endif %}
{% endspaceless %}
{% endfor %}
@@ -0,0 +1,27 @@
The following lists the expected format and details of our rrules:

* DTSTART is required and must follow the following format: DTSTART:YYYYMMDDTHHMMSSZ
* DTSTART is expected to be in UTC
* INTERVAL is required
* SECONDLY is not supported
* RRULE must precede the rule statements
* BYDAY is supported but not BYDAY with a numerical prefix
* BYYEARDAY and BYWEEKNO are not supported
* Only one rrule statement per schedule is supported
* COUNT must be < 1000

Here are some example rrules:

    "DTSTART:20500331T055000Z RRULE:FREQ=MINUTELY;INTERVAL=10;COUNT=5"
    "DTSTART:20240331T075000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1"
    "DTSTART:20140331T075000Z RRULE:FREQ=MINUTELY;INTERVAL=1;UNTIL=20230401T075000Z"
    "DTSTART:20140331T075000Z RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=MO,WE,FR"
    "DTSTART:20140331T075000Z RRULE:FREQ=WEEKLY;INTERVAL=5;BYDAY=MO"
    "DTSTART:20140331T075000Z RRULE:FREQ=MONTHLY;INTERVAL=1;BYMONTHDAY=6"
    "DTSTART:20140331T075000Z RRULE:FREQ=MONTHLY;INTERVAL=1;BYSETPOS=4;BYDAY=SU"
    "DTSTART:20140331T075000Z RRULE:FREQ=MONTHLY;INTERVAL=1;BYSETPOS=-1;BYDAY=MO,TU,WE,TH,FR"
    "DTSTART:20140331T075000Z RRULE:FREQ=MONTHLY;INTERVAL=1;BYSETPOS=-1;BYDAY=MO,TU,WE,TH,FR,SA,SU"
    "DTSTART:20140331T075000Z RRULE:FREQ=YEARLY;INTERVAL=1;BYMONTH=4;BYMONTHDAY=1"
    "DTSTART:20140331T075000Z RRULE:FREQ=YEARLY;INTERVAL=1;BYSETPOS=-1;BYMONTH=8;BYDAY=SU"
    "DTSTART:20140331T075000Z RRULE:FREQ=WEEKLY;INTERVAL=1;UNTIL=20230401T075000Z;BYDAY=MO,WE,FR"
    "DTSTART:20140331T075000Z RRULE:FREQ=HOURLY;INTERVAL=1;UNTIL=20230610T075000Z"
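A few of the constraints above (DTSTART in the required UTC format, a mandatory INTERVAL, no SECONDLY frequency) can be pre-checked client-side before POSTing. This is only a loose sanity-check sketch, not the server's actual validation logic:

```python
import re

# One DTSTART in YYYYMMDDTHHMMSSZ form, then one RRULE clause that
# contains INTERVAL= and does not use FREQ=SECONDLY.
RRULE_RE = re.compile(
    r"^DTSTART:\d{8}T\d{6}Z RRULE:(?=[^ ]*INTERVAL=)(?![^ ]*FREQ=SECONDLY)[^ ]+$"
)

def looks_valid(rrule):
    """Cheap pre-flight check; the server remains the authority."""
    return bool(RRULE_RE.match(rrule))
```

It accepts the examples above and rejects, for instance, a SECONDLY rule or a DTSTART missing the trailing `Z`.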
@@ -0,0 +1,5 @@
POST requests to this resource must include a proper `rrule` value following
a particular format and conforming to a subset of allowed rules.

{% include "api/_schedule_detail.md" %}
@@ -0,0 +1,3 @@
Relaunch an Ad Hoc Command:

Make a POST request to this resource to launch a job. If any passwords or variables are required, they should be passed in via POST data. To determine what values are required to launch a job based on this job template, you may make a GET request to this endpoint.
@@ -0,0 +1,114 @@
# Token Handling using OAuth2

This page lists the OAuth 2 utility endpoints used for authorization, token
refresh, and revocation. Note that endpoints other than `/api/o/authorize/`
are not meant to be used in browsers and do not support HTTP GET. The
endpoints here strictly follow the
[RFC specs for OAuth2](https://tools.ietf.org/html/rfc6749), so please use
that for detailed reference. Note that the AWX network location defaults to
`http://localhost:8013` in the examples below:
## Create Token for an Application using Authorization code grant type

Given an application "AuthCodeApp" of grant type `authorization-code`,
from the client app, the user makes a GET to the Authorize endpoint with

* `response_type`
* `client_id`
* `redirect_uris`
* `scope`

AWX will respond with the authorization `code` and `state`
to the redirect_uri specified in the application. The client application will then make a POST to the
`api/o/token/` endpoint on AWX with

* `code`
* `client_id`
* `client_secret`
* `grant_type`
* `redirect_uri`

AWX will respond with the `access_token`, `token_type`, `refresh_token`, and `expires_in`. For more
information on testing this flow, refer to [django-oauth-toolkit](http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server).
## Create Token for an Application using Password grant type

Logging in is not required for the `password` grant type, so a simple `curl` can be used to acquire a
personal access token via `/api/o/token/` with

* `grant_type`: Required to be "password"
* `username`
* `password`
* `client_id`: Associated application must have grant_type "password"
* `client_secret`

For example:

```bash
curl -X POST \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=<username>&password=<password>&scope=read" \
  -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
  http://localhost:8013/api/o/token/ -i
```

In the above POST request, the parameters `username` and `password` are the username and password of the
related AWX user of the underlying application, and the authentication information is of the format
`<client_id>:<client_secret>`, where `client_id` and `client_secret` are the corresponding fields of the
underlying application.

Upon success, the access token, refresh token, and other information are given in the response body in
JSON format:

```text
{
    "access_token": "9epHOqHhnXUcgYK8QanOmUQPSgX92g",
    "token_type": "Bearer",
    "expires_in": 31536000000,
    "refresh_token": "jMRX6QvzOTf046KHee3TU5mT3nyXsz",
    "scope": "read"
}
```
## Refresh an existing access token

The `/api/o/token/` endpoint is used for refreshing the access token:

```bash
curl -X POST \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
  -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
  http://localhost:8013/api/o/token/ -i
```

In the above POST request, `refresh_token` is provided by the `refresh_token` field of the access token
above. The authentication information is of the format `<client_id>:<client_secret>`, where `client_id`
and `client_secret` are the corresponding fields of the underlying related application of the access token.

Upon success, the new (refreshed) access token with the same scope information as the previous one is
given in the response body in JSON format:

```text
{
    "access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR",
    "token_type": "Bearer",
    "expires_in": 31536000000,
    "refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT",
    "scope": "read write"
}
```

Internally, the refresh operation deletes the existing token and a new token is created immediately
after, with information like scope and related application identical to the original one. We can
verify this by checking that the new token is present at the `api/v2/tokens` endpoint.
## Revoke an access token

Revoking an access token is the same as deleting the token resource object.
Revoking is done by POSTing to `/api/o/revoke_token/` with the token to revoke as a parameter:

```bash
curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
  http://localhost:8013/api/o/revoke_token/ -i
```

`200 OK` means a successful delete.
@@ -0,0 +1,4 @@
The root of the REST API.

Make a GET request to this resource to obtain information about the available
API versions.
@@ -0,0 +1,33 @@
{% ifmeth GET %}
# Site configuration settings and general information

Make a GET request to this resource to retrieve the configuration containing
the following fields (some fields may not be visible to all users):

* `project_base_dir`: Path on the server where projects and playbooks are
  stored.
* `project_local_paths`: List of directories beneath `project_base_dir` to
  use when creating/editing a project.
* `time_zone`: The configured time zone for the server.
* `license_info`: Information about the current license.
* `version`: Version of the Ansible Tower package installed.
* `eula`: The current End-User License Agreement.
{% endifmeth %}

{% ifmeth POST %}
# Install or update an existing license

(_New in Ansible Tower 2.0.0_) Make a POST request to this resource as a super
user to install or update the existing license. The license data itself can
be POSTed as a normal JSON data structure.

(_New in Ansible Tower 2.1.1_) The POST must include a `eula_accepted` boolean
element indicating acceptance of the End-User License Agreement.
{% endifmeth %}

{% ifmeth DELETE %}
# Delete an existing license

(_New in Ansible Tower 2.0.0_) Make a DELETE request to this resource as a super
user to delete the existing license.
{% endifmeth %}
@@ -0,0 +1,4 @@
Version 1 of the Ansible Tower REST API.

Make a GET request to this resource to obtain a list of all child resources
available via the API.
@@ -0,0 +1,4 @@
Version 2 of the REST API.

Make a GET request to this resource to obtain a list of all child resources
available via the API.
@@ -0,0 +1 @@
{{ docstring }}
@@ -0,0 +1,13 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title }} Variable Data:

Make a GET request to this resource to retrieve all variables defined for a
{{ model_verbose_name }}.
{% endifmeth %}

{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title }} Variable Data:

Make a PUT or PATCH request to this resource to update variables defined for a
{{ model_verbose_name }}.
{% endifmeth %}
@@ -0,0 +1,40 @@
Make a GET request to this resource to retrieve aggregate statistics about inventory suitable for graphing.

This includes fetching the number of total hosts tracked by Tower over a period of time and the current
successful or failed status of hosts which have run jobs within an Inventory.

## Parameters and Filtering

The `period` of the data can be adjusted with:

    ?period=month

Where `month` can be replaced with `week`, or `day`. `month` is the default.

## Results

Data about the number of hosts will be returned in the following format:

    "hosts": [
        [
            1402808400.0,
            86743
        ], ...]

Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of hosts that exist at a given moment.

Data about failed and successful hosts by inventory will be given as:

    {
        "sources": [
            {
                "successful": 21,
                "source": "ec2",
                "name": "aws (Test Inventory)",
                "failed": 0
            }
        ],
        "id": 2,
        "name": "Test Inventory"
    }
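Since each sample is an `[epoch_seconds, count]` pair, plotting libraries usually want the timestamps converted to datetimes first. A minimal sketch using the sample value shown above:

```python
from datetime import datetime, timezone

# Each sample is an [epoch_seconds, host_count] pair as shown above.
samples = [[1402808400.0, 86743]]

# Convert epoch seconds (UTC) into timezone-aware datetimes for graphing.
points = [
    (datetime.fromtimestamp(ts, tz=timezone.utc), count)
    for ts, count in samples
]
```

The resulting `(datetime, count)` tuples can be fed directly to most charting APIs.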
@@ -0,0 +1,37 @@
# View Statistics for Job Runs

Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing.

## Parameters and Filtering

The `period` of the data can be adjusted with:

    ?period=month

Where `month` can be replaced with `week`, `two_weeks`, or `day`. `month` is the default.

The type of job can be filtered with:

    ?job_type=all

Where `all` can be replaced with `inv_sync`, `playbook_run` or `scm_update`. `all` is the default.

## Results

Data will be returned in the following format:

    "jobs": {
        "successful": [
            [
                1402808400.0,
                9
            ], ... ],
        "failed": [
            [
                1402808400.0,
                3
            ], ... ]
    }

Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of events during that time period.
@@ -0,0 +1 @@
Make a GET request to this resource to retrieve aggregate statistics for Tower.
@@ -0,0 +1,7 @@
# List All {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a list of all
{{ model_verbose_name_plural }} directly or indirectly belonging to this
{{ parent_model_verbose_name }}.

{% include "api/_list_common.md" %}
@@ -0,0 +1,7 @@
# List Potential Child Groups for {{ parent_model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a list of
{{ model_verbose_name_plural }} available to be added as children of the
current {{ parent_model_verbose_name }}.

{% include "api/_list_common.md" %}
@@ -0,0 +1,7 @@
# List All {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a list of all
{{ model_verbose_name_plural }} of which the selected
{{ parent_model_verbose_name }} is directly or indirectly a member.

{% include "api/_list_common.md" %}
@@ -0,0 +1,11 @@
# List Fact Scans for a Specific Host Scan

Make a GET request to this resource to retrieve system tracking data for a particular scan.

You may filter by datetime:

`?datetime=2015-06-01`

and by module:

`?datetime=2015-06-01&module=ansible`
@@ -0,0 +1,11 @@
# List Fact Scans for a Host by Module and Date

Make a GET request to this resource to retrieve system tracking scans by module and date/time.

You may filter scan runs using the `from` and `to` properties:

`?from=2015-06-01%2012:00:00&to=2015-06-03`

You may also filter by module:

`?module=packages`
@@ -0,0 +1 @@
# List Red Hat Insights for a Host
@@ -0,0 +1,15 @@
{% include "api/list_api_view.md" %}

`host_filter` is available on this endpoint. The filter supports: relational queries, `and` `or` boolean logic, as well as expression grouping via `()`.

    ?host_filter=name=my_host
    ?host_filter=name="my host" or name=my_host
    ?host_filter=groups__name="my group"
    ?host_filter=name=my_host and groups__name="my group"
    ?host_filter=(name=my_host and groups__name="my group") or (name=my_host2 and groups__name=my_group2)

`host_filter` can also be used to query JSON data in the related `ansible_facts`. `__` may be used to traverse JSON dictionaries. `[]` may be used to traverse JSON arrays.

    ?host_filter=ansible_facts__ansible_processor_vcpus=8
    ?host_filter=ansible_facts__ansible_processor_vcpus=8 and name="my_host" and ansible_facts__ansible_lo__ipv6[]__scope=host
@@ -0,0 +1,31 @@
# Update Inventory Sources

Make a GET request to this resource to determine if any of the inventory sources for
this inventory can be updated. The response will include the following fields for each
inventory source:

* `inventory_source`: ID of the inventory_source
  (integer, read-only)
* `can_update`: Flag indicating if this inventory source can be updated
  (boolean, read-only)

Make a POST request to this resource to update the inventory sources. The response
status code will be a 202. The response will contain the following fields for each of the individual
inventory sources:

* `status`: `started` or message why the update could not be started.
  (string, read-only)
* `inventory_update`: ID of the inventory update job that was started.
  (integer, read-only)
* `project_update`: ID of the project update job that was started if this inventory source is an SCM source.
  (integer, read-only, optional)

Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories.

The response code from this action will be:

- 200 if all inventory source updates were successful
- 202 if some inventory source updates were successful, but some failed
- 400 if all of the inventory source updates failed
- 400 if there are no inventory sources in the inventory
@@ -0,0 +1,9 @@
{% ifmeth GET %}
# List Root {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a list of root (top-level)
{{ model_verbose_name_plural }} associated with this
{{ parent_model_verbose_name }}.

{% include "api/_list_common.md" %}
{% endifmeth %}
@@ -0,0 +1,39 @@
Generate inventory group and host data as needed for an inventory script.

Refer to [Dynamic Inventory](http://docs.ansible.com/intro_dynamic_inventory.html)
for more information on inventory scripts.

## List Response

Make a GET request to this resource without query parameters to retrieve a JSON
object containing groups, including the hosts, children and variables for each
group. The response data is equivalent to that returned by passing the
`--list` argument to an inventory script.

Specify a query string of `?hostvars=1` to retrieve the JSON
object above including all host variables. The `['_meta']['hostvars']` object
in the response contains an entry for each host with its variables. This
response format can be used with Ansible 1.3 and later to avoid making a
separate API request for each host. Refer to
[Tuning the External Inventory Script](http://docs.ansible.com/developing_inventory.html#tuning-the-external-inventory-script)
for more information on this feature.

By default, the inventory script will only return hosts that
are enabled in the inventory. This feature allows disabled hosts to be skipped
when running jobs without removing them from the inventory. Specify a query
string of `?all=1` to return all hosts, including disabled ones.

Specify a query string of `?towervars=1` to add variables
to the hostvars of each host that specifies its enabled state and database ID.

Specify a query string of `?subset=slice2of5` to produce an inventory that
has a restricted number of hosts according to the rules of job slicing.

To apply multiple query strings, join them with the `&` character, like `?hostvars=1&all=1`.

## Host Response

Make a GET request to this resource with a query string similar to
`?host=HOSTNAME` to retrieve a JSON object containing host variables for the
specified host. The response data is equivalent to that returned by passing
the `--host HOSTNAME` argument to an inventory script.
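The point of the `?hostvars=1` form is that one list response already carries every host's variables under `_meta`, so no per-host request is needed. A sketch of reading that structure, using a hand-built sample shaped like the documented response (group names and variable values are invented):

```python
# Sample shaped like a `?hostvars=1` list response; values are invented.
inventory = {
    "web": {"hosts": ["host1"], "vars": {"tier": "frontend"}},
    "_meta": {"hostvars": {"host1": {"ansible_host": "10.0.0.5"}}},
}

def hostvars(inv, name):
    """Read a host's variables from _meta instead of one request per host."""
    return inv["_meta"]["hostvars"].get(name, {})
```

With a live endpoint, `inventory` would simply be the decoded JSON of the list response.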
@@ -0,0 +1,11 @@
# Cancel Inventory Update

Make a GET request to this resource to determine if the inventory update can be
canceled. The response will include the following field:

* `can_cancel`: Indicates whether this update can be canceled (boolean,
  read-only)

Make a POST request to this resource to cancel a pending or running inventory
update. The response status code will be 202 if successful, or 405 if the
update cannot be canceled.
@@ -0,0 +1,5 @@
{% extends "api/sub_list_create_api_view.md" %}

{% block post_create %}
{% include "api/_schedule_list_common.md" %}
{% endblock %}
@@ -0,0 +1,11 @@
# Update Inventory Source

Make a GET request to this resource to determine if the group can be updated
from its inventory source. The response will include the following field:

* `can_update`: Flag indicating if this inventory source can be updated
  (boolean, read-only)

Make a POST request to this resource to update the inventory source. If
successful, the response status code will be 202. If the inventory source is
not defined or cannot be updated, a 405 status code will be returned.
@@ -0,0 +1,13 @@
# Group Tree for {{ model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a hierarchical view of groups
associated with the selected {{ model_verbose_name }}.

The resulting data structure contains a list of root groups, with each group
also containing a list of its children.

## Results

Each group data structure includes the following fields:

{% include "api/_result_fields_common.md" %}
@@ -0,0 +1,15 @@
{% ifmeth GET %}
# Determine if a Job can be canceled

Make a GET request to this resource to determine if the job can be canceled.
The response will include the following field:

* `can_cancel`: Indicates whether this job can be canceled (boolean, read-only)
{% endifmeth %}

{% ifmeth POST %}
# Cancel a Job

Make a POST request to this resource to cancel a pending or running job. The
response status code will be 202 if successful, or 405 if the job cannot be
canceled.
{% endifmeth %}
@@ -0,0 +1,12 @@
Create a schedule based on a job:

Make a POST request to this endpoint to create a schedule that launches
the job template that launched this job, and uses the same
parameters that the job was launched with. These parameters include all
"prompted" resources such as `extra_vars`, `inventory`, `limit`, etc.

Jobs that were launched with user-provided passwords cannot have a schedule
created from them.

Make a GET request for information about what those prompts are and
whether or not a schedule can be created.
@@ -0,0 +1,25 @@
Make a GET request to retrieve the list of aggregated play data associated with a job.

## Filtering

This endpoint supports a limited subset of filtering:

    ?event_id__in=1,2,3

Will show only the given ids.

    ?event_id__gt=1

Will show ids greater than the given one.

    ?event_id__lt=3

Will show ids less than the given one.

    ?failed=true

Will show only failed plays. Alternatively `false` may be used.

    ?play__icontains=test

Will filter plays matching the substring `test`.
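These filters are ordinary query parameters, so they compose. A small Python sketch using only the standard library (the base path and job id are illustrative):

```python
from urllib.parse import urlencode

def build_filter_url(base, **filters):
    """Append filter query parameters (event_id__gt, failed, ...)
    to a list-endpoint URL."""
    return f"{base}?{urlencode(filters)}" if filters else base

# Combine a range filter, the failed flag, and a substring match.
url = build_filter_url(
    "/api/v2/jobs/42/job_plays/",  # job id 42 is illustrative
    event_id__gt=1,
    failed="true",
    play__icontains="test",
)
```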
@@ -0,0 +1,27 @@
Make a GET request to retrieve the list of aggregated task data associated with the play given by `event_id`.

`event_id` is a required query parameter and must match the job event id of the parent play in order to receive the list of tasks associated with the play.

## Filtering

This endpoint supports a limited subset of filtering:

    ?event_id__in=1,2,3

Will show only the given task ids under the play given by `event_id`.

    ?event_id__gt=1

Will show ids greater than the given one.

    ?event_id__lt=3

Will show ids less than the given one.

    ?failed=true

Will show only failed tasks. Alternatively `false` may be used.

    ?task__icontains=test

Will filter tasks matching the substring `test`.
@@ -0,0 +1,5 @@
{% include "api/list_create_api_view.md" %}

If the `job_template` field is specified, any fields not explicitly provided
for the new job (except `name` and `description`) will use the default values
from the job template.
@@ -0,0 +1,3 @@
Relaunch a Job:

Make a POST request to this resource to launch a job. If any passwords or variables are required, they should be passed in via POST data. To determine what values are required to launch a job based on this job template, make a GET request to this endpoint.
@@ -0,0 +1,21 @@
{% ifmeth GET %}
# Determine if a Job can be started

Make a GET request to this resource to determine if the job can be started and
whether any passwords are required to start the job. The response will include
the following fields:

* `can_start`: Flag indicating if this job can be started (boolean, read-only)
* `passwords_needed_to_start`: Password names required to start the job (array,
  read-only)
{% endifmeth %}

{% ifmeth POST %}
# Start a Job

Make a POST request to this resource to start the job. If any passwords are
required, they must be passed via POST data.

If successful, the response status code will be 202. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
started, a 405 status code will be returned.
{% endifmeth %}
@@ -0,0 +1,41 @@
The job template callback allows for ephemeral hosts to launch a new job.

Configure a host to POST to this resource, passing the `host_config_key`
parameter, to start a new job limited to only the requesting host. In the
examples below, replace the `N` parameter with the `id` of the job template
and the `HOST_CONFIG_KEY` with the `host_config_key` associated with the
job template.

For example, using curl:

    curl -H "Content-Type: application/json" -d '{"host_config_key": "HOST_CONFIG_KEY"}' http://server/api/v2/job_templates/N/callback/

Or using wget:

    wget -O /dev/null --post-data='{"host_config_key": "HOST_CONFIG_KEY"}' --header=Content-Type:application/json http://server/api/v2/job_templates/N/callback/

You may also pass `extra_vars` to the callback:

    curl -H "Content-Type: application/json" -d '{"host_config_key": "HOST_CONFIG_KEY", "extra_vars": {"key": "value"}}' http://server/api/v2/job_templates/N/callback/

The response will return status 202 if the request is valid, 403 for an
invalid host config key, or 400 if the host cannot be determined from the
address making the request.

_(New in Ansible Tower 2.0.0)_ If the associated inventory has the
`update_on_launch` flag set and if the `update_cache_timeout` has expired, the
callback will perform an inventory sync to find a matching host.

A GET request may be used to verify that the correct host will be selected.
This request must authenticate as a valid user with permission to edit the
job template. For example:

    curl http://user:password@server/api/v2/job_templates/N/callback/

The response will include the host config key as well as the host name(s)
that would match the request:

    {
        "host_config_key": "HOST_CONFIG_KEY",
        "matching_hosts": ["hostname"]
    }
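The same callback POST can be assembled with Python's standard library. This sketch only builds the request object (pass it to `urllib.request.urlopen` to actually send it); the server name and template id placeholders mirror the curl examples above:

```python
import json
import urllib.request

def build_callback_request(server, template_id, host_config_key, extra_vars=None):
    """Build (but do not send) the job template callback POST request."""
    payload = {"host_config_key": host_config_key}
    if extra_vars:
        payload["extra_vars"] = extra_vars
    return urllib.request.Request(
        f"http://{server}/api/v2/job_templates/{template_id}/callback/",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_callback_request("server", "N", "HOST_CONFIG_KEY",
                             extra_vars={"key": "value"})
```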
@@ -0,0 +1,6 @@
{% extends "api/sub_list_create_api_view.md" %}

{% block post_create %}
Any fields not explicitly provided for the new job (except `name` and
`description`) will use the default values from the job template.
{% endblock %}
@@ -0,0 +1,7 @@
{% include "api/sub_list_create_api_view.md" %}

Labels not associated with any other resources are deleted. A label can become disassociated from a resource as a result of three events:

1. A label is explicitly disassociated from a related job template
2. A job with labels is deleted
3. A cleanup job deletes a job with labels
@@ -0,0 +1,43 @@
Launch a Job Template:

Make a GET request to this resource to determine if the job_template can be
launched and whether any passwords are required to launch the job_template.
The response will include the following fields:

* `ask_variables_on_launch`: Flag indicating whether the job_template is
  configured to prompt for variables upon launch (boolean, read-only)
* `ask_tags_on_launch`: Flag indicating whether the job_template is
  configured to prompt for tags upon launch (boolean, read-only)
* `ask_skip_tags_on_launch`: Flag indicating whether the job_template is
  configured to prompt for skip_tags upon launch (boolean, read-only)
* `ask_job_type_on_launch`: Flag indicating whether the job_template is
  configured to prompt for job_type upon launch (boolean, read-only)
* `ask_limit_on_launch`: Flag indicating whether the job_template is
  configured to prompt for limit upon launch (boolean, read-only)
* `ask_inventory_on_launch`: Flag indicating whether the job_template is
  configured to prompt for inventory upon launch (boolean, read-only)
* `ask_credential_on_launch`: Flag indicating whether the job_template is
  configured to prompt for credential upon launch (boolean, read-only)
* `can_start_without_user_input`: Flag indicating if the job_template can be
  launched without user input (boolean, read-only)
* `passwords_needed_to_start`: Password names required to launch the
  job_template (array, read-only)
* `variables_needed_to_start`: Variable names required to launch the
  job_template (array, read-only)
* `survey_enabled`: Flag indicating whether the job_template has an enabled
  survey (boolean, read-only)
* `inventory_needed_to_start`: Flag indicating the presence of an inventory
  associated with the job template. If not set, one must be supplied when
  launching the job (boolean, read-only)

Make a POST request to this resource to launch the job_template. If any
passwords, inventory, or extra variables (extra_vars) are required, they must
be passed via POST data, with extra_vars given as a YAML or JSON string and
escaped parentheses. If `inventory_needed_to_start` is `True`, then the
`inventory` is required.

If successful, the response status code will be 201. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
launched, a 405 status code will be returned. If the provided credential or
inventory are not allowed to be used by the user, then a 403 status code will
be returned.
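A client can combine the GET response fields above to decide what its POST body still needs. This helper is an illustrative sketch, not part of the AWX API; the field names follow the documented response:

```python
def missing_launch_fields(launch_info, post_data):
    """Given the GET response from the launch endpoint and a candidate
    POST body, return the names of fields still to be supplied."""
    missing = []
    # Each listed password must appear in the POST data.
    for password in launch_info.get("passwords_needed_to_start", []):
        if password not in post_data:
            missing.append(password)
    # Each required variable must appear inside extra_vars.
    extra_vars = post_data.get("extra_vars", {})
    for var in launch_info.get("variables_needed_to_start", []):
        if var not in extra_vars:
            missing.append(var)
    # An inventory must be supplied when none is attached to the template.
    if launch_info.get("inventory_needed_to_start") and "inventory" not in post_data:
        missing.append("inventory")
    return missing
```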
@@ -0,0 +1,5 @@
{% extends "api/sub_list_create_api_view.md" %}

{% block post_create %}
{% include "api/_schedule_list_common.md" %}
{% endblock %}
@@ -0,0 +1,120 @@
POST requests to this resource should include the full specification for a {{ model_verbose_name|title }}'s Survey.

Here is an example survey specification:

    {
        "name": "Simple Survey",
        "description": "Description of the simple survey",
        "spec": [
            {
                "type": "text",
                "question_name": "example question",
                "question_description": "What is your favorite color?",
                "variable": "favorite_color",
                "required": false,
                "default": "blue"
            }
        ]
    }

`name` and `description` are required elements at the beginning of the survey specification. `spec` must be a
list of survey items.

Within each survey item `type` must be one of:

* text: For survey questions expecting a textual answer
* password: For survey questions expecting a password or other sensitive information
* integer: For survey questions expecting a whole number answer
* float: For survey questions expecting a decimal number
* multiplechoice: For survey questions where one option from a list is required
* multiselect: For survey questions where multiple items from a presented list can be selected

Each item must contain a `question_name` and `question_description` field that describes the survey question itself.
The `variable` element of each survey item represents the key that will be given to the playbook when the {{ model_verbose_name }}
is launched. It will contain the value as a result of the survey.

Here is a more comprehensive example showing the various question types and their acceptable parameters:

    {
        "name": "Simple",
        "description": "Description",
        "spec": [
            {
                "type": "text",
                "question_name": "cantbeshort",
                "question_description": "What is a long answer",
                "variable": "long_answer",
                "choices": "",
                "min": 5,
                "max": "",
                "required": false,
                "default": "Leeloo Minai Lekarariba-Laminai-Tchai Ekbat De Sebat"
            },
            {
                "type": "text",
                "question_name": "cantbelong",
                "question_description": "What is a short answer",
                "variable": "short_answer",
                "choices": "",
                "min": "",
                "max": 7,
                "required": false,
                "default": "leeloo"
            },
            {
                "type": "text",
                "question_name": "reqd",
                "question_description": "I should be required",
                "variable": "reqd_answer",
                "choices": "",
                "min": "",
                "max": "",
                "required": true,
                "default": "NOT OPTIONAL"
            },
            {
                "type": "multiplechoice",
                "question_name": "achoice",
                "question_description": "Need one of these",
                "variable": "single_choice",
                "choices": ["one", "two"],
                "min": "",
                "max": "",
                "required": false,
                "default": "one"
            },
            {
                "type": "multiselect",
                "question_name": "mchoice",
                "question_description": "Can have multiples of these",
                "variable": "multi_choice",
                "choices": ["one", "two", "three"],
                "min": "",
                "max": "",
                "required": false,
                "default": "one\nthree"
            },
            {
                "type": "integer",
                "question_name": "integerchoice",
                "question_description": "I need an int here",
                "variable": "int_answer",
                "choices": "",
                "min": 1,
                "max": 5,
                "required": false,
                "default": ""
            },
            {
                "type": "float",
                "question_name": "float",
                "question_description": "I need a float here",
                "variable": "float_answer",
                "choices": "",
                "min": 2,
                "max": 5,
                "required": false,
                "default": ""
            }
        ]
    }
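Clients that build survey specifications programmatically can sanity-check them before POSTing. This is a minimal validator sketch for the shape described above; the checks and messages are illustrative, not AWX's own validation logic:

```python
# Question types accepted by the survey specification.
ALLOWED_TYPES = {"text", "password", "integer", "float",
                 "multiplechoice", "multiselect"}

def validate_survey_spec(spec):
    """Return a list of problems found in a survey specification dict."""
    problems = []
    # name, description and spec are required at the top level.
    for key in ("name", "description", "spec"):
        if key not in spec:
            problems.append(f"missing required element: {key}")
    for i, item in enumerate(spec.get("spec", [])):
        if item.get("type") not in ALLOWED_TYPES:
            problems.append(f"item {i}: invalid type {item.get('type')!r}")
        # Every item needs a question, a description, and a variable name.
        for key in ("question_name", "question_description", "variable"):
            if key not in item:
                problems.append(f"item {i}: missing {key}")
    return problems
```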
@@ -0,0 +1,8 @@
{% ifmeth GET %}
# List {{ model_verbose_name_plural|title }}:

Make a GET request to this resource to retrieve the list of
{{ model_verbose_name_plural }}.

{% include "api/_list_common.md" %}
{% endifmeth %}
@@ -0,0 +1,10 @@
{% include "api/list_api_view.md" %}

# Create {{ model_verbose_name|title|anora }}:

Make a POST request to this resource with the following {{ model_verbose_name }}
fields to create a new {{ model_verbose_name }}:

{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_create_fields %}
{% endwith %}
@@ -0,0 +1,3 @@
{% with model_verbose_name="admin user" model_verbose_name_plural="admin users" %}
{% include "api/sub_list_create_api_view.md" %}
{% endwith %}
@@ -0,0 +1,4 @@
# Retrieve {{ model_verbose_name|title }} Playbooks:

Make a GET request to this resource to retrieve a list of playbooks available
for {{ model_verbose_name|anora }}.
@@ -0,0 +1,5 @@
{% extends "api/sub_list_create_api_view.md" %}

{% block post_create %}
{% include "api/_schedule_list_common.md" %}
{% endblock %}
@@ -0,0 +1,11 @@
# Cancel Project Update

Make a GET request to this resource to determine if the project update can be
canceled. The response will include the following field:

* `can_cancel`: Indicates whether this update can be canceled (boolean,
  read-only)

Make a POST request to this resource to cancel a pending or running project
update. The response status code will be 202 if successful, or 405 if the
update cannot be canceled.
@@ -0,0 +1,10 @@
# Update Project

Make a GET request to this resource to determine if the project can be updated
from its SCM source. The response will include the following field:

* `can_update`: Flag indicating if this project can be updated (boolean,
  read-only)

Make a POST request to this resource to update the project. If the project
cannot be updated, a 405 status code will be returned.
@@ -0,0 +1,6 @@
# Retrieve {{ model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:

{% include "api/_result_fields_common.md" %}
@@ -0,0 +1,14 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:

{% include "api/_result_fields_common.md" %}
{% endifmeth %}

{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:

Make a DELETE request to this resource to delete this {{ model_verbose_name }}.
{% endifmeth %}
@@ -0,0 +1,27 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:

{% include "api/_result_fields_common.md" %}
{% endifmeth %}

{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title|anora }}:

Make a PUT or PATCH request to this resource to update this
{{ model_verbose_name }}. The following fields may be modified:

{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_update_fields %}
{% endwith %}
{% endifmeth %}

{% ifmeth PUT %}
For a PUT request, include **all** fields in the request.
{% endifmeth %}

{% ifmeth PATCH %}
For a PATCH request, include only the fields that are being modified.
{% endifmeth %}
@@ -0,0 +1,33 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:

Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:

{% include "api/_result_fields_common.md" %}
{% endifmeth %}

{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title|anora }}:

Make a PUT or PATCH request to this resource to update this
{{ model_verbose_name }}. The following fields may be modified:

{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_update_fields %}
{% endwith %}
{% endifmeth %}

{% ifmeth PUT %}
For a PUT request, include **all** fields in the request.
{% endifmeth %}

{% ifmeth PATCH %}
For a PATCH request, include only the fields that are being modified.
{% endifmeth %}

{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:

Make a DELETE request to this resource to delete this {{ model_verbose_name }}.
{% endifmeth %}