2 changes: 1 addition & 1 deletion backend/control_center/.env.example
@@ -50,4 +50,4 @@ TELEGRAM_API_KEY=your-telegram-api-key-here
# Set this to the published Docker Hub image you want to use
# Format: username/repository:tag or repository:tag
# Example: myusername/sdn-launch-control-backend:latest
# DOCKER_IMAGE=sdn-launch-control-backend:latest
DOCKER_IMAGE=taurinetech/sdn-launch-control-backend:1.0.0-beta

⚠️ Potential issue | 🟡 Minor

Add blank line at end of file.

The file is missing a trailing newline. POSIX convention and most linters expect files to end with a newline character.

Apply this diff:

-DOCKER_IMAGE=taurinetech/sdn-launch-control-backend:1.0.0-beta
+DOCKER_IMAGE=taurinetech/sdn-launch-control-backend:1.0.0-beta
+
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
DOCKER_IMAGE=taurinetech/sdn-launch-control-backend:1.0.0-beta
DOCKER_IMAGE=taurinetech/sdn-launch-control-backend:1.0.0-beta
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 53-53: [EndingBlankLine] No blank line at the end of the file

(EndingBlankLine)

🤖 Prompt for AI Agents
In backend/control_center/.env.example around line 53, the file is missing a
trailing newline (POSIX convention and linters expect a newline at EOF); fix by
adding a single blank line/newline character at the end of the file so the file
ends with a newline.
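For reference, a minimal shell sketch that applies the same fix from the command line (the path comes from the comment above; the check-and-append logic is illustrative, not part of this PR):

#!/bin/bash
# Append a newline to .env.example only if its last byte is not already a newline
f="backend/control_center/.env.example"
if [ -n "$(tail -c 1 "$f")" ]; then
  printf '\n' >> "$f"
fi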

@@ -15,3 +15,55 @@
path: "/mnt/config_port_{{ item }}.sh"
state: absent
loop: "{{ interfaces }}"

# Stop and disable the interface monitor timer
- name: Stop and disable interface monitor timer
ansible.builtin.systemd:
name: "interface-monitor.timer"
state: stopped
enabled: no
ignore_errors: yes

# Remove the systemd timer file
- name: Remove systemd timer file for interface monitor
file:
path: "/etc/systemd/system/interface-monitor.timer"
state: absent
ignore_errors: yes

# Remove the systemd service file
- name: Remove systemd service file for interface monitor
file:
path: "/etc/systemd/system/interface-monitor.service"
state: absent
ignore_errors: yes

# Remove logrotate configuration
- name: Remove logrotate configuration for interface monitor
file:
path: "/etc/logrotate.d/interface-monitor"
state: absent
ignore_errors: yes

# Remove the interface monitor directory and all its contents
- name: Remove interface monitor directory and all contents
file:
path: "/mnt/interface-monitor"
state: absent
ignore_errors: yes

# Remove log files
- name: Remove interface monitor log files
file:
path: "{{ item }}"
state: absent
loop:
- "/var/log/interface_monitor.log"
- "/var/log/interface_monitor.log.old"
ignore_errors: yes

# Reload systemd to pick up the removed service and timer
- name: Reload systemd after removing interface monitor service and timer
ansible.builtin.systemd:
daemon_reload: yes
ignore_errors: yes
@@ -6,4 +6,7 @@
# import_tasks: configure_ports.yaml

- name: Add ports to OVS bridge
  import_tasks: add_ports.yml
  import_tasks: add_ports.yml

- name: Setup interface monitoring
  import_tasks: setup_interface_monitoring.yml
@@ -0,0 +1,195 @@
---
- name: Create interface monitor directory
  file:
    path: "/mnt/interface-monitor"
    state: directory
    mode: "0755"

- name: Create log file with correct ownership and permissions
  file:
    path: "/var/log/interface_monitor.log"
    state: touch
    owner: root
    group: root
    mode: "0644"

- name: Create interface monitoring script
  copy:
    dest: "/mnt/interface-monitor/monitor_interfaces.sh"
    content: |
      #!/bin/bash
      # Network Interface Monitor Script
      # Monitors USB-to-Ethernet adapters and brings them up if they're down
      # Interfaces: {{ interfaces | join(', ') }}

      # Configuration
      INTERFACES=({{ interfaces | map('quote') | join(' ') }})
Comment on lines +16 to +26

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify the interfaces Jinja variable is available in playbook context.

Line 26 uses {{ interfaces | map('quote') | join(' ') }} to populate the INTERFACES bash array. This variable must be provided by the calling playbook or role defaults. If not defined, the array will be malformed.
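For illustration, with a hypothetical value such as interfaces: ['enx1a2b3c4d5e6f', 'eth1'], the templated line would render to roughly the following (the interface names here are placeholders, not values from this repository):

# Hypothetical rendered output of line 26
INTERFACES=(enx1a2b3c4d5e6f eth1)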

Confirm that:

  1. The interfaces variable is passed to this task (e.g., as a role variable or task var)
  2. The variable is properly documented in role defaults or README
  3. The map('quote') filter correctly escapes interface names with special characters

To verify this, check if the interfaces variable is defined by searching the playbook context:


🏁 Script executed:

#!/bin/bash
# Search for where `interfaces` variable is defined or passed in Ansible files

# Look for variable definitions in defaults, vars, or task invocations
find backend/control_center/ansible -name "*.yml" -o -name "*.yaml" | xargs rg -n "interfaces:" -A 2 -B 2

Length of output: 0


🏁 Script executed:

#!/bin/bash

# 1. Verify the monitoring task file exists
echo "=== Checking if monitoring task file exists ==="
ls -la backend/control_center/ansible/roles/ovs-port-setup/tasks/setup_interface_monitoring.yml 2>&1

# 2. Search for "interfaces" more broadly (without colon requirement)
echo -e "\n=== Searching for 'interfaces' pattern in Ansible files ==="
find backend/control_center/ansible -type f \( -name "*.yml" -o -name "*.yaml" \) -exec rg -l "interfaces" {} \;

# 3. Check role structure for defaults and vars
echo -e "\n=== Role defaults/main.yml content ==="
if [ -f "backend/control_center/ansible/roles/ovs-port-setup/defaults/main.yml" ]; then
  cat backend/control_center/ansible/roles/ovs-port-setup/defaults/main.yml
else
  echo "File not found"
fi

echo -e "\n=== Role vars/main.yml content ==="
if [ -f "backend/control_center/ansible/roles/ovs-port-setup/vars/main.yml" ]; then
  cat backend/control_center/ansible/roles/ovs-port-setup/vars/main.yml
else
  echo "File not found"
fi

# 4. Search for playbooks that might call this role
echo -e "\n=== Searching for playbook files that call ovs-port-setup role ==="
find backend/control_center/ansible -type f -name "*.yml" -o -name "*.yaml" | xargs rg -l "ovs-port-setup" | head -10

Length of output: 1173


🏁 Script executed:

#!/bin/bash

# 1. Read the main playbook to understand how the role is invoked
echo "=== Main playbook: ovs-port-setup.yml ==="
cat backend/control_center/ansible/playbooks/ovs-port-setup.yml

# 2. Check role's tasks/main.yml if it exists
echo -e "\n=== Role tasks/main.yml ==="
if [ -f "backend/control_center/ansible/roles/ovs-port-setup/tasks/main.yml" ]; then
  cat backend/control_center/ansible/roles/ovs-port-setup/tasks/main.yml
else
  echo "File not found"
fi

# 3. Check add_ports.yml to see how it references interfaces
echo -e "\n=== Role add_ports.yml (first 50 lines) ==="
head -50 backend/control_center/ansible/roles/ovs-port-setup/tasks/add_ports.yml

# 4. Check the full setup_interface_monitoring.yml to see context of interfaces usage
echo -e "\n=== Full setup_interface_monitoring.yml ==="
cat backend/control_center/ansible/roles/ovs-port-setup/tasks/setup_interface_monitoring.yml

Length of output: 6571


🏁 Script executed:

#!/bin/bash

# Check for group_vars and host_vars directories
echo "=== Searching for group_vars and host_vars ==="
find backend/control_center/ansible -type d -name "group_vars" -o -type d -name "host_vars"

echo -e "\n=== Content of group_vars directories ==="
find backend/control_center/ansible -path "*group_vars*" -type f | head -20

echo -e "\n=== Content of host_vars directories ==="
find backend/control_center/ansible -path "*host_vars*" -type f | head -20

echo -e "\n=== Searching for interfaces variable definition in all yml/yaml files ==="
rg -n "^\s*interfaces\s*:" backend/control_center/ansible --type yaml -C 3 | head -100

echo -e "\n=== Check inventory files ==="
find backend/control_center/ansible -name "inventory*" -o -name "hosts*" | head -10

Length of output: 238


The interfaces variable is undefined throughout the codebase and must be defined before this role executes.

Verification found that {{ interfaces }} is used in both setup_interface_monitoring.yml (lines 24, 26) and add_ports.yml, but the variable is never defined in:

  • The playbook (ovs-port-setup.yml)
  • Role defaults or vars files (do not exist)
  • Group vars or host vars (do not exist)
  • Any other YAML configuration file

Without this variable, templating the script will fail with an undefined-variable error (or, at best, produce a malformed INTERFACES array), leaving the monitoring script non-functional. The with_items: "{{ interfaces }}" loop in add_ports.yml will also fail.

Required action: Define interfaces as a list variable (either in playbook vars, role defaults/main.yml, or pass via command line with -e interfaces=[...]).
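A minimal sketch of the command-line option (the playbook path is taken from the verification scripts above; the interface names are placeholders and an inventory flag may also be required):

# Illustrative only — substitute the real adapter names
ansible-playbook backend/control_center/ansible/playbooks/ovs-port-setup.yml \
  -e '{"interfaces": ["enx1a2b3c4d5e6f", "enx1a2b3c4d5e70"]}'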

🤖 Prompt for AI Agents
In
backend/control_center/ansible/roles/ovs-port-setup/tasks/setup_interface_monitoring.yml
around lines 16 to 26, the Jinja variable {{ interfaces }} is undefined and must
be declared as a list before this role runs; add a list variable named
interfaces (e.g., interfaces: ['ethX', 'ethY']) either in the role's
defaults/main.yml, in the playbook vars of ovs-port-setup.yml, or pass via -e on
the command line so both setup_interface_monitoring.yml and add_ports.yml can
iterate it; ensure it is a YAML list (not a string), contains the correct
interface names, and is available to the role scope.

LOG_FILE="/var/log/interface_monitor.log"
LOCK_FILE="/var/run/interface_monitor.lock"

# Function to log messages with timestamp
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE" 2>/dev/null
}

# Function to check if interface is up
is_interface_up() {
local interface="$1"
# Check if interface exists and is UP
if ip link show "$interface" 2>/dev/null | grep -q "state UP"; then
return 0 # Interface is up
else
return 1 # Interface is down
fi
}

# Function to bring interface up
bring_interface_up() {
local interface="$1"
log_message "Attempting to bring up interface: $interface"

# Try to bring the interface up
if ip link set "$interface" up 2>/dev/null; then
# Wait a moment for the interface to stabilize
sleep 2

# Check if it's actually up now
if is_interface_up "$interface"; then
log_message "SUCCESS: Interface $interface is now UP"
return 0
else
log_message "WARNING: Interface $interface command succeeded but interface still appears down"
return 1
fi
else
log_message "ERROR: Failed to bring up interface $interface"
return 1
fi
}

# Function to check if script is already running
check_lock() {
if [ -f "$LOCK_FILE" ]; then
local pid=$(cat "$LOCK_FILE")
if kill -0 "$pid" 2>/dev/null; then
log_message "WARNING: Monitor script already running (PID: $pid)"
exit 1
else
# Lock file exists but process is dead, remove it
rm -f "$LOCK_FILE"
fi
fi
}

# Function to create lock file
create_lock() {
echo $$ > "$LOCK_FILE"
}

# Function to cleanup lock file
cleanup() {
rm -f "$LOCK_FILE"
exit 0
}

# Set up signal handlers for cleanup
trap cleanup EXIT INT TERM

# Main monitoring function
main() {
# Check for lock file to prevent multiple instances
check_lock
create_lock

log_message "Starting interface monitoring check"

local interfaces_brought_up=0

# Check each interface
for interface in "${INTERFACES[@]}"; do
if is_interface_up "$interface"; then
log_message "Interface $interface is UP - OK"
else
log_message "Interface $interface is DOWN - attempting to bring up"
if bring_interface_up "$interface"; then
((interfaces_brought_up++))
fi
fi
done

# Summary
if [ $interfaces_brought_up -gt 0 ]; then
log_message "SUMMARY: Brought up $interfaces_brought_up interface(s)"
else
log_message "SUMMARY: All interfaces are UP - no action needed"
fi

log_message "Interface monitoring check completed"
}

# Run main function (redirect all output to /dev/null to avoid console output)
main "$@" > /dev/null 2>&1
mode: "0755"

- name: Create systemd service file for interface monitor
  copy:
    dest: "/etc/systemd/system/interface-monitor.service"
    content: |
      [Unit]
      Description=Network Interface Monitor
      Documentation=Monitor USB-to-Ethernet adapters and bring them up if down
      After=network.target

      [Service]
      Type=oneshot
      ExecStart=/mnt/interface-monitor/monitor_interfaces.sh
      User=root
      StandardOutput=journal
      StandardError=journal

      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Create systemd timer file for interface monitor
  copy:
    dest: "/etc/systemd/system/interface-monitor.timer"
    content: |
      [Unit]
      Description=Network Interface Monitor Timer
      Requires=interface-monitor.service

      [Timer]
      OnBootSec=30s
      OnUnitActiveSec=30s
      Persistent=true

      [Install]
      WantedBy=timers.target
    mode: "0644"

- name: Create logrotate configuration for interface monitor
  copy:
    dest: "/etc/logrotate.d/interface-monitor"
    content: |
      /var/log/interface_monitor.log {
          daily
          rotate 7
          compress
          delaycompress
          missingok
          notifempty
          create 644 root root
          maxsize 3M
      }
    mode: "0644"

- name: Reload systemd to pick up new service and timer files
  ansible.builtin.systemd:
    daemon_reload: yes

- name: Enable and start interface monitor timer
  ansible.builtin.systemd:
    name: "interface-monitor.timer"
    enabled: yes
    state: started
2 changes: 1 addition & 1 deletion backend/control_center/docker-compose.yml
@@ -77,7 +77,7 @@ services:
  celery_beat:
    container_name: launch-control-celery-beat
    # Use same published image as django service
    image: ${DOCKER_IMAGE:-sdn-launch-control-backend:latest}
    image: ${DOCKER_IMAGE:-sdn-launch-control-backend:1.0.0-beta}
    restart: unless-stopped
    env_file: .env
    command: celery -A control_center beat -l ERROR --scheduler django_celery_beat.schedulers:DatabaseScheduler
23 changes: 23 additions & 0 deletions backend/control_center/general/views.py
@@ -688,7 +688,11 @@ def sync(self, request, pk=None):
        port = self.get_object()
        device = port.device

        logger.debug(f"[PORT_SYNC] Starting sync for port {port.name} (ID: {port.id}) on device {device.name} (IP: {device.lan_ip_address})")
        logger.debug(f"[PORT_SYNC] Current port state - is_up: {port.is_up}, link_speed: {port.link_speed}")

        if device.device_type != 'switch':
            logger.warning(f"[PORT_SYNC] Sync attempted on non-switch device {device.name}")
            return Response(
                {"error": "Port sync is only available for switch devices"},
                status=status.HTTP_400_BAD_REQUEST
@@ -705,6 +709,8 @@
            'ip_address': device.lan_ip_address,
        }

        logger.debug(f"[PORT_SYNC] Running playbook '{playbook_name}' with extra_vars: {extra_vars}")

        result = run_playbook_with_extravars(
            playbook_name,
            playbook_dir_path,
@@ -713,30 +719,47 @@
            quiet=True
        )

        logger.debug(f"[PORT_SYNC] Playbook result status: {result.get('status')}")

        if result.get('status') != 'success':
            error_msg = result.get('error', 'Unknown error occurred')
            logger.error(f"[PORT_SYNC] Failed to sync port {port.name}: {error_msg}")
            logger.error(f"Failed to sync port {port.name}: {error_msg}")
            return Response(
                {"error": f"Failed to sync port details: {error_msg}"},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )

        # Parse results
        logger.debug(f"[PORT_SYNC] Parsing results for port {port.name}")
        port_speed = get_single_port_speed_from_results(result, port.name)
        port_status = get_port_status_from_results(result, port.name)

        logger.debug(f"[PORT_SYNC] Parsed results for port {port.name}:")
        logger.debug(f"[PORT_SYNC] - port_speed: {port_speed}")
        logger.debug(f"[PORT_SYNC] - port_status: {port_status}")
        logger.debug(f"[PORT_SYNC] - Current DB is_up: {port.is_up}")
        logger.debug(f"[PORT_SYNC] - Current DB link_speed: {port.link_speed}")

        # Update port in database
        update_fields = []
        if port_speed is not None:
            old_speed = port.link_speed
            port.link_speed = port_speed
            update_fields.append('link_speed')
            logger.debug(f"[PORT_SYNC] Updating link_speed: {old_speed} -> {port_speed}")
        if port_status is not None:
            old_status = port.is_up
            port.is_up = port_status
            update_fields.append('is_up')
            logger.debug(f"[PORT_SYNC] Updating is_up: {old_status} -> {port_status}")

        if update_fields:
            port.save(update_fields=update_fields)
            logger.debug(f"[PORT_SYNC] Successfully updated port {port.name} with fields: {update_fields}")
            logger.debug(f"Updated port {port.name}: speed={port_speed}, status={port_status}")
        else:
            logger.debug(f"[PORT_SYNC] No fields to update for port {port.name}")

        # Return updated port data
        serializer = PortSerializer(port)