Compare commits: main...ba43d22a1a (1 commit)

## SCHEDULER_README.md (new file, 422 lines)
# Scheduler Module

The scheduler module provides flexible time-based automation for the plant watering system, with NVS persistence and full MQTT control.

## Features

- **Multiple Schedule Types**:
  - Interval-based (every X minutes)
  - Time of day (daily at a specific time)
  - Day-specific (specific days at a specific time)
- **Per-Pump Scheduling**: Up to 4 independent schedules per pump
- **NTP Time Synchronization**: Automatic internet time sync
- **Holiday Mode**: Pause all schedules without deleting them
- **MQTT Configuration**: Full remote control and monitoring
- **NVS Persistence**: Schedules survive power cycles
- **Manual Override**: Test schedules without waiting
- **No External Dependencies**: Built-in JSON handling

## Schedule Configuration

### Schedule Types

#### 1. Interval Schedule
Waters every X minutes, measured from the last run time.
```json
{
  "type": "interval",
  "enabled": true,
  "interval_minutes": 120,
  "duration_ms": 15000,
  "speed_percent": 70
}
```

#### 2. Time of Day Schedule
Waters daily at a specific time.
```json
{
  "type": "time_of_day",
  "enabled": true,
  "hour": 6,
  "minute": 30,
  "duration_ms": 20000,
  "speed_percent": 80
}
```

#### 3. Days and Time Schedule
Waters on specific days at a specific time.
```json
{
  "type": "days_time",
  "enabled": true,
  "hour": 18,
  "minute": 0,
  "days_mask": 42,
  "duration_ms": 25000,
  "speed_percent": 75
}
```

### Days Mask Values
- Sunday: 1 (bit 0)
- Monday: 2 (bit 1)
- Tuesday: 4 (bit 2)
- Wednesday: 8 (bit 3)
- Thursday: 16 (bit 4)
- Friday: 32 (bit 5)
- Saturday: 64 (bit 6)

Common masks:
- Daily: 127 (all days)
- Weekdays: 62 (Mon-Fri)
- Weekends: 65 (Sat-Sun)
- Mon/Wed/Fri: 42
## MQTT Topics

### Schedule Configuration
Configure individual schedules for each pump.

**Topic**: `plant_watering/schedule/[pump_id]/[schedule_id]/config`
- pump_id: 1 or 2
- schedule_id: 0-3

**Example**: Configure pump 1, schedule 0 for daily 6:30 AM watering
```bash
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/0/config" -m '{
  "type": "time_of_day",
  "enabled": true,
  "hour": 6,
  "minute": 30,
  "duration_ms": 20000,
  "speed_percent": 80
}'
```

### View All Schedules
Get all configured schedules on demand.

**Topic**: `plant_watering/commands/get_schedules`
**Payload**: Any value (e.g., "1")

```bash
# Request all schedules
mosquitto_pub -h <broker> -t "plant_watering/commands/get_schedules" -m "1"

# Monitor the responses
mosquitto_sub -h <broker> -t "plant_watering/schedule/+/+/current" -t "plant_watering/schedule/summary" -v
```

**Response Topics**:
- `plant_watering/schedule/[pump_id]/[schedule_id]/current` - Each configured schedule
- `plant_watering/schedule/summary` - Summary of all schedules

Summary format:
```json
{
  "total_schedules": 4,
  "active_schedules": 3,
  "holiday_mode": false,
  "time_sync": true
}
```

### View Pump Schedules
Get schedules for a specific pump.

**Topic**: `plant_watering/schedule/[pump_id]/get`
**Payload**: Any value

```bash
# Get all schedules for pump 1
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/get" -m "1"

# Get all schedules for pump 2
mosquitto_pub -h <broker> -t "plant_watering/schedule/2/get" -m "1"
```

### Schedule Status
The system publishes schedule status after configuration and periodically.

**Topic**: `plant_watering/schedule/[pump_id]/[schedule_id]/status`

**Response**:
```json
{
  "pump_id": 1,
  "schedule_id": 0,
  "type": "time_of_day",
  "enabled": true,
  "hour": 6,
  "minute": 30,
  "duration_ms": 20000,
  "speed_percent": 80,
  "next_run": 1706522400,
  "next_run_str": "2024-01-29 06:30:00",
  "last_run": 1706436000
}
```

### Global Scheduler Status
Published every minute while connected.

**Topic**: `plant_watering/schedule/status`

**Format**:
```json
{
  "holiday_mode": false,
  "time_sync": true,
  "active_schedules": 3,
  "time": 1706436000
}
```

### Current System Time
Check or monitor the device's current time.

**Get Time (On Demand)**
- **Topic**: `plant_watering/commands/get_time`
- **Payload**: Any value (e.g., "1")
- **Response Topic**: `plant_watering/system/time`

```bash
# Request current time
mosquitto_pub -h <broker> -t "plant_watering/commands/get_time" -m "1"
```

Response:
```json
{
  "timestamp": 1706436000,
  "datetime": "2024-01-28 14:32:45 MST",
  "timezone": "MST7MDT,M3.2.0,M11.1.0",
  "synced": true
}
```

**Periodic Time (Every Minute)**
- **Topic**: `plant_watering/system/current_time`

Format:
```json
{
  "timestamp": 1706436000,
  "datetime": "2024-01-28 14:32:00 MST",
  "day_of_week": 0,
  "hour": 14,
  "minute": 32
}
```

### Holiday Mode
Pause all schedules without deleting them.

**Topic**: `plant_watering/commands/holiday_mode`
**Payload**: `on` or `off`

```bash
# Enable holiday mode
mosquitto_pub -h <broker> -t "plant_watering/commands/holiday_mode" -m "on"

# Disable holiday mode
mosquitto_pub -h <broker> -t "plant_watering/commands/holiday_mode" -m "off"
```

### Manual Time Setting
If NTP is unavailable, set the time manually.

**Topic**: `plant_watering/schedule/time/set`
**Payload**: Unix timestamp (as a string)

```bash
# Set current time
mosquitto_pub -h <broker> -t "plant_watering/schedule/time/set" -m "$(date +%s)"
```

### Manual Schedule Trigger
Test schedules without waiting.

**Topic**: `plant_watering/schedule/[pump_id]/trigger`

```bash
# Trigger all enabled schedules for pump 1
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/trigger" -m "1"
```

### Schedule Execution Notification
Published when a schedule executes.

**Topic**: `plant_watering/schedule/[pump_id]/executed`

**Format**:
```json
{
  "schedule_id": 0,
  "duration_ms": 20000,
  "speed": 80
}
```

## Example Configurations

### Example 1: Morning and Evening Watering
```bash
# Morning watering at 6:30 AM
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/0/config" -m '{
  "type": "time_of_day",
  "enabled": true,
  "hour": 6,
  "minute": 30,
  "duration_ms": 15000,
  "speed_percent": 70
}'

# Evening watering at 6:30 PM
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/1/config" -m '{
  "type": "time_of_day",
  "enabled": true,
  "hour": 18,
  "minute": 30,
  "duration_ms": 15000,
  "speed_percent": 70
}'
```

### Example 2: Every 2 Hours
```bash
# Interval watering every 2 hours
mosquitto_pub -h <broker> -t "plant_watering/schedule/2/0/config" -m '{
  "type": "interval",
  "enabled": true,
  "interval_minutes": 120,
  "duration_ms": 10000,
  "speed_percent": 60
}'
```

### Example 3: Weekday Morning Watering
```bash
# Water Monday-Friday at 7:00 AM
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/2/config" -m '{
  "type": "days_time",
  "enabled": true,
  "hour": 7,
  "minute": 0,
  "days_mask": 62,
  "duration_ms": 20000,
  "speed_percent": 80
}'
```

### Example 4: Different Weekend Schedule
```bash
# Weekend watering at 9:00 AM with longer duration
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/3/config" -m '{
  "type": "days_time",
  "enabled": true,
  "hour": 9,
  "minute": 0,
  "days_mask": 65,
  "duration_ms": 30000,
  "speed_percent": 75
}'
```

## Disable/Enable Schedules

To disable a schedule without deleting it:
```bash
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/0/config" -m '{
  "type": "time_of_day",
  "enabled": false,
  "hour": 6,
  "minute": 30,
  "duration_ms": 20000,
  "speed_percent": 80
}'
```

To completely remove a schedule:
```bash
mosquitto_pub -h <broker> -t "plant_watering/schedule/1/0/config" -m '{
  "type": "disabled"
}'
```

## Time Zone Configuration

The scheduler uses Mountain Time (MST/MDT) by default. To change it:

1. Edit `scheduler.c` in the `scheduler_init()` function:
```c
// Set timezone (adjust as needed)
setenv("TZ", "PST8PDT,M3.2.0,M11.1.0", 1); // Pacific Time
setenv("TZ", "EST5EDT,M3.2.0,M11.1.0", 1); // Eastern Time
setenv("TZ", "CST6CDT,M3.2.0,M11.1.0", 1); // Central Time
setenv("TZ", "GMT0BST,M3.5.0,M10.5.0", 1); // UK Time
```

## Serial Monitor Output

The system status is printed to serial every 30 seconds, including:
```
I (xxxxx) MAIN: Scheduler: 2 active, Holiday: OFF, DateTime: 2024-01-28 14:32:45
```

## Troubleshooting

### Time Not Synchronized
- Check the internet connection
- Verify NTP servers are accessible
- Set the time manually as a workaround
- Check serial output for sync status

### Schedule Not Executing
1. Check that time is synchronized
2. Verify the schedule is enabled
3. Check that holiday mode is off
4. Verify the schedule configuration is valid
5. Check the pump isn't in its cooldown period
6. Monitor execution notifications on MQTT

### Schedule Executes at Wrong Time
- Verify the timezone setting
- Check that the system time is correct
- Remember schedules won't run twice within 60 seconds
- Use the `get_time` command to verify device time

## Integration with Automation

The scheduler can work alongside moisture-based automation:
- Schedules provide baseline watering
- Moisture sensors can trigger additional watering
- Both respect motor safety limits (max runtime, cooldown)

## Best Practices

1. **Test Schedules**: Use the manual trigger to test before relying on a schedule
2. **Start Simple**: Begin with one schedule and add more as needed
3. **Monitor Execution**: Watch MQTT topics to confirm schedules work
4. **Use Holiday Mode**: Don't delete schedules when going away
5. **Stagger Schedules**: If using multiple pumps, offset times to reduce load
6. **Monitor Time Sync**: Ensure the device maintains correct time

## Implementation Notes

- **No External Dependencies**: The scheduler uses built-in JSON parsing instead of the cJSON library
- **Time Check Interval**: Schedules are checked every 30 seconds
- **Execution Window**: Schedules execute within 60 seconds of the target time
- **NTP Servers**: Uses pool.ntp.org, time.nist.gov, and time.google.com
- **Persistence**: Schedule configurations are saved to NVS; runtime info is not persisted

## Limitations

- Maximum 4 schedules per pump (8 total)
- Minimum resolution is 1 minute
- Schedules are checked every 30 seconds (so execution may be up to 30 s late)
- All times are in the configured timezone
- Requires accurate system time (NTP or manual)
- The simple JSON parser has basic error handling

## Component CMakeLists (registers the new source file)

```diff
@@ -6,6 +6,7 @@ idf_component_register(
         "plant_mqtt.c"
         "led_strip.c"
         "motor_control.c"
+        "scheduler.c"
     INCLUDE_DIRS
         "."
     REQUIRES
```
## main/main.c (297 lines changed)

```diff
@@ -10,6 +10,7 @@
 #include "ota_server.h"
 #include "plant_mqtt.h"
 #include "motor_control.h"
+#include "scheduler.h"
 #include "sdkconfig.h"

 // Uncomment this line to enable motor test mode with shorter intervals
@@ -18,12 +19,15 @@
 static const char *TAG = "MAIN";

 // Application version
-#define APP_VERSION "2.1.0-motor"
+#define APP_VERSION "2.2.0-scheduler"

 // Test data
 static int test_moisture_1 = 45;
 static int test_moisture_2 = 62;

 // Function prototypes
 static void print_chip_info(void);

 // Motor Control Callbacks
 static void motor_state_change_callback(motor_id_t id, motor_state_t state)
 {
@@ -63,6 +67,40 @@ static void motor_error_callback(motor_id_t id, const char* error)
     }
 }

+// Scheduler callback
+static void scheduler_trigger_callback(uint8_t pump_id, uint8_t schedule_id,
+                                       uint32_t duration_ms, uint8_t speed_percent)
+{
+    ESP_LOGI(TAG, "Schedule %d triggered for pump %d: %lu ms at %d%%",
+             schedule_id, pump_id, duration_ms, speed_percent);
+
+    // Start the pump with the scheduled parameters
+    esp_err_t ret = motor_start_timed(pump_id, speed_percent, duration_ms);
+    if (ret != ESP_OK) {
+        ESP_LOGE(TAG, "Failed to start pump %d for schedule %d: %s",
+                 pump_id, schedule_id, esp_err_to_name(ret));
+
+        // Publish error to MQTT
+        if (mqtt_client_is_connected()) {
+            char topic[64];
+            char msg[128];
+            snprintf(topic, sizeof(topic), "plant_watering/alerts/schedule_error/%d", pump_id);
+            snprintf(msg, sizeof(msg), "Schedule %d failed: %s", schedule_id, esp_err_to_name(ret));
+            mqtt_client_publish(topic, msg, MQTT_QOS_1, MQTT_NO_RETAIN);
+        }
+    } else {
+        // Publish schedule execution to MQTT
+        if (mqtt_client_is_connected()) {
+            char topic[64];
+            char msg[128];
+            snprintf(topic, sizeof(topic), "plant_watering/schedule/%d/executed", pump_id);
+            snprintf(msg, sizeof(msg), "{\"schedule_id\":%d,\"duration_ms\":%lu,\"speed\":%d}",
+                     schedule_id, duration_ms, speed_percent);
+            mqtt_client_publish(topic, msg, MQTT_QOS_0, MQTT_NO_RETAIN);
+        }
+    }
+}
+
 // MQTT Callbacks
 static void mqtt_connected_callback(void)
 {
@@ -80,6 +118,13 @@ static void mqtt_connected_callback(void)
         "plant_watering/commands/test_pump/+",
         "plant_watering/commands/emergency_stop",
         "plant_watering/commands/test_mode",
+        "plant_watering/commands/holiday_mode",
+        "plant_watering/commands/get_time",
+        "plant_watering/commands/get_schedules",
+        "plant_watering/schedule/+/+/config",
+        "plant_watering/schedule/+/trigger",
+        "plant_watering/schedule/+/get",
+        "plant_watering/schedule/time/set",
         "plant_watering/settings/+/+",
         NULL
     };
@@ -196,13 +241,148 @@ static void mqtt_data_callback(const char* topic, const char* data, int data_len
             motor_set_max_runtime(MOTOR_PUMP_1, CONFIG_WATERING_MAX_DURATION_MS);
             motor_set_max_runtime(MOTOR_PUMP_2, CONFIG_WATERING_MAX_DURATION_MS);
         }
-    } else if (strncmp(topic, "plant_watering/settings/pump/", 29) == 0) {
-        // Parse settings commands like:
-        // plant_watering/settings/pump/1/max_runtime
-        // plant_watering/settings/pump/1/min_interval
-        // plant_watering/settings/pump/1/min_speed
-        // plant_watering/settings/pump/1/max_speed
+    } else if (strcmp(topic, "plant_watering/commands/holiday_mode") == 0) {
+        if (strncmp(data, "on", data_len) == 0) {
+            scheduler_set_holiday_mode(true);
+            ESP_LOGI(TAG, "Holiday mode enabled - all schedules paused");
+        } else if (strncmp(data, "off", data_len) == 0) {
+            scheduler_set_holiday_mode(false);
+            ESP_LOGI(TAG, "Holiday mode disabled - schedules resumed");
+        }
+    } else if (strcmp(topic, "plant_watering/commands/get_time") == 0) {
+        // Publish current time information
+        if (scheduler_is_time_synchronized()) {
+            time_t now = scheduler_get_current_time();
+            struct tm timeinfo;
+            localtime_r(&now, &timeinfo);
+
+            char time_str[64];
+            strftime(time_str, sizeof(time_str), "%Y-%m-%d %H:%M:%S %Z", &timeinfo);
+
+            char response[256];
+            snprintf(response, sizeof(response),
+                     "{\"timestamp\":%lld,\"datetime\":\"%s\",\"timezone\":\"%s\",\"synced\":true}",
+                     (long long)now, time_str, getenv("TZ") ? getenv("TZ") : "UTC");
+
+            mqtt_client_publish("plant_watering/system/time", response, MQTT_QOS_0, MQTT_NO_RETAIN);
+            ESP_LOGI(TAG, "Time: %s", time_str);
+        } else {
+            mqtt_client_publish("plant_watering/system/time",
+                                "{\"synced\":false,\"message\":\"Time not synchronized\"}",
+                                MQTT_QOS_0, MQTT_NO_RETAIN);
+            ESP_LOGW(TAG, "Time not synchronized");
+        }
+    } else if (strcmp(topic, "plant_watering/commands/get_schedules") == 0) {
+        // Publish all configured schedules
+        ESP_LOGI(TAG, "Publishing all schedules");
+
+        int active_count = 0;
+
+        // Publish each configured schedule
+        for (int pump = 1; pump <= 2; pump++) {
+            for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
+                schedule_config_t config;
+                if (scheduler_get_schedule(pump, sched, &config) == ESP_OK) {
+                    // Only publish if schedule is configured (not disabled)
+                    if (config.type != SCHEDULE_TYPE_DISABLED) {
+                        char topic_buf[64];
+                        char json[512];
+                        snprintf(topic_buf, sizeof(topic_buf),
+                                 "plant_watering/schedule/%d/%d/current", pump, sched);
+
+                        if (scheduler_schedule_to_json(pump, sched, json, sizeof(json)) == ESP_OK) {
+                            mqtt_client_publish(topic_buf, json, MQTT_QOS_0, MQTT_NO_RETAIN);
+                            if (config.enabled) {
+                                active_count++;
+                            }
+                        }
+                    }
+                }
+            }
+        }
+
+        // Publish summary
+        char summary[256];
+        snprintf(summary, sizeof(summary),
+                 "{\"total_schedules\":%d,\"active_schedules\":%d,\"holiday_mode\":%s,\"time_sync\":%s}",
+                 active_count,
+                 active_count,
+                 scheduler_get_holiday_mode() ? "true" : "false",
+                 scheduler_is_time_synchronized() ? "true" : "false");
+        mqtt_client_publish("plant_watering/schedule/summary", summary, MQTT_QOS_0, MQTT_NO_RETAIN);
+
+        ESP_LOGI(TAG, "Published %d schedules", active_count);
+    } else if (strncmp(topic, "plant_watering/schedule/", 24) == 0) {
+        // Parse schedule commands
+        if (strcmp(topic, "plant_watering/schedule/time/set") == 0) {
+            // Set system time manually (useful if no NTP)
+            time_t timestamp = atoll(data); // Use atoll for long long
+            if (timestamp > 0) {
+                scheduler_set_time(timestamp);
+                ESP_LOGI(TAG, "System time set to %lld", (long long)timestamp);
+            }
+        } else {
+            int pump_id = 0;
+            int schedule_id = 0;
+            char action[16] = {0};
+
+            int parsed = sscanf(topic + 24, "%d/%d/%15s", &pump_id, &schedule_id, action);
+
+            if (parsed == 2) {
+                // Check if it's a trigger command
+                if (sscanf(topic + 24, "%d/trigger", &pump_id) == 1) {
+                    if (pump_id >= 1 && pump_id <= 2) {
+                        // Trigger all enabled schedules for this pump
+                        ESP_LOGI(TAG, "Manual trigger for pump %d schedules", pump_id);
+                        for (int i = 0; i < SCHEDULER_MAX_SCHEDULES_PER_PUMP; i++) {
+                            scheduler_trigger_schedule(pump_id, i);
+                        }
+                    }
+                }
+                // Check if it's a get command for specific pump
+                else if (sscanf(topic + 24, "%d/get", &pump_id) == 1) {
+                    if (pump_id >= 1 && pump_id <= 2) {
+                        ESP_LOGI(TAG, "Getting schedules for pump %d", pump_id);
+                        for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
+                            schedule_config_t config;
+                            if (scheduler_get_schedule(pump_id, sched, &config) == ESP_OK &&
+                                config.type != SCHEDULE_TYPE_DISABLED) {
+                                char topic_buf[64];
+                                char json[512];
+                                snprintf(topic_buf, sizeof(topic_buf),
+                                         "plant_watering/schedule/%d/%d/current", pump_id, sched);
+
+                                if (scheduler_schedule_to_json(pump_id, sched, json, sizeof(json)) == ESP_OK) {
+                                    mqtt_client_publish(topic_buf, json, MQTT_QOS_0, MQTT_NO_RETAIN);
+                                }
+                            }
+                        }
+                    }
+                }
+            } else if (parsed == 3 && strcmp(action, "config") == 0) {
+                // Configure schedule
+                if (pump_id >= 1 && pump_id <= 2 &&
+                    schedule_id >= 0 && schedule_id < SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
+
+                    esp_err_t ret = scheduler_json_to_schedule(data, pump_id, schedule_id);
+                    if (ret == ESP_OK) {
+                        ESP_LOGI(TAG, "Updated schedule %d for pump %d", schedule_id, pump_id);
+
+                        // Publish confirmation
+                        char response_topic[64];
+                        char response[512];
+                        snprintf(response_topic, sizeof(response_topic),
+                                 "plant_watering/schedule/%d/%d/status", pump_id, schedule_id);
+                        scheduler_schedule_to_json(pump_id, schedule_id, response, sizeof(response));
+                        mqtt_client_publish(response_topic, response, MQTT_QOS_0, MQTT_RETAIN);
+                    } else {
+                        ESP_LOGE(TAG, "Failed to update schedule: %s", esp_err_to_name(ret));
+                    }
+                }
+            }
+        }
+    } else if (strncmp(topic, "plant_watering/settings/pump/", 29) == 0) {
+        // Parse settings commands
         int pump_id = 0;
         char setting[32] = {0};

@@ -218,10 +398,7 @@ static void mqtt_data_callback(const char* topic, const char* data, int data_len
             motor_set_min_interval(pump_id, value);
             ESP_LOGI(TAG, "Set pump %d min interval to %d ms", pump_id, value);
         } else if (strcmp(setting, "min_speed") == 0 && value >= 0 && value <= 100) {
-            // Get current max speed to validate
-            motor_stats_t stats;
-            motor_get_stats(pump_id, &stats);
-            motor_set_speed_limits(pump_id, value, 100); // Assuming max stays at 100
+            motor_set_speed_limits(pump_id, value, 100);
             ESP_LOGI(TAG, "Set pump %d min speed to %d%%", pump_id, value);
         } else if (strcmp(setting, "max_speed") == 0 && value > 0 && value <= 100) {
             motor_set_speed_limits(pump_id, MOTOR_MIN_SPEED, value);
@@ -321,34 +498,62 @@ static void sensor_simulation_task(void *pvParameters)
     }
 }

-// Task to demonstrate automated watering based on moisture
-static void automation_demo_task(void *pvParameters)
+// Task to publish schedule status periodically
+static void schedule_status_task(void *pvParameters)
 {
-    bool auto_mode = false; // Start with manual mode
-
     while (1) {
-        if (auto_mode && mqtt_client_is_connected()) {
-            // Simple threshold-based automation demo
-            if (test_moisture_1 < CONFIG_MOISTURE_THRESHOLD_LOW) {
-                if (!motor_is_running(MOTOR_PUMP_1) && !motor_is_cooldown(MOTOR_PUMP_1)) {
-                    ESP_LOGI(TAG, "Auto: Moisture 1 low (%d%%), starting pump 1", test_moisture_1);
-                    motor_start_timed(MOTOR_PUMP_1, MOTOR_DEFAULT_SPEED, 10000); // 10 second watering
-                }
-            }
-
-            if (test_moisture_2 < CONFIG_MOISTURE_THRESHOLD_LOW) {
-                if (!motor_is_running(MOTOR_PUMP_2) && !motor_is_cooldown(MOTOR_PUMP_2)) {
-                    ESP_LOGI(TAG, "Auto: Moisture 2 low (%d%%), starting pump 2", test_moisture_2);
-                    motor_start_timed(MOTOR_PUMP_2, MOTOR_DEFAULT_SPEED, 10000); // 10 second watering
-                }
-            }
-        }
-
-        vTaskDelay(30000 / portTICK_PERIOD_MS); // Check every 30 seconds
+        vTaskDelay(pdMS_TO_TICKS(60000)); // Every minute
+
+        if (mqtt_client_is_connected() && scheduler_is_time_synchronized()) {
+            // Publish scheduler status
+            scheduler_status_t status;
+            if (scheduler_get_status(&status) == ESP_OK) {
+                char status_json[256];
+                snprintf(status_json, sizeof(status_json),
+                         "{\"holiday_mode\":%s,\"time_sync\":%s,\"active_schedules\":%lu,\"time\":%lld}",
+                         status.holiday_mode ? "true" : "false",
+                         status.time_synchronized ? "true" : "false",
+                         status.active_schedules,
+                         (long long)scheduler_get_current_time());
+                mqtt_client_publish("plant_watering/schedule/status", status_json, MQTT_QOS_0, MQTT_RETAIN);
+            }
+
+            // Publish human-readable time periodically
+            time_t now = scheduler_get_current_time();
+            struct tm timeinfo;
+            localtime_r(&now, &timeinfo);
+
+            char time_str[64];
+            strftime(time_str, sizeof(time_str), "%Y-%m-%d %H:%M:%S %Z", &timeinfo);
+
+            char time_json[256];
+            snprintf(time_json, sizeof(time_json),
+                     "{\"timestamp\":%lld,\"datetime\":\"%s\",\"day_of_week\":%d,\"hour\":%d,\"minute\":%d}",
+                     (long long)now, time_str, timeinfo.tm_wday, timeinfo.tm_hour, timeinfo.tm_min);
+            mqtt_client_publish("plant_watering/system/current_time", time_json, MQTT_QOS_0, MQTT_NO_RETAIN);
+
+            // Publish all active schedules
+            for (int pump = 1; pump <= 2; pump++) {
+                for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
+                    schedule_config_t config;
+                    if (scheduler_get_schedule(pump, sched, &config) == ESP_OK &&
+                        config.enabled && config.type != SCHEDULE_TYPE_DISABLED) {
+
+                        char topic[64];
+                        char json[512];
+                        snprintf(topic, sizeof(topic), "plant_watering/schedule/%d/%d/status", pump, sched);
+
+                        if (scheduler_schedule_to_json(pump, sched, json, sizeof(json)) == ESP_OK) {
+                            mqtt_client_publish(topic, json, MQTT_QOS_0, MQTT_RETAIN);
+                        }
+                    }
+                }
+            }
+        }
     }
 }

-void print_chip_info(void)
+static void print_chip_info(void)
 {
     esp_chip_info_t chip_info;

@@ -372,6 +577,7 @@ void app_main(void)

     // Print configuration
     ESP_LOGI(TAG, "Configuration:");
     ESP_LOGI(TAG, "  MQTT Broker: %s", CONFIG_MQTT_BROKER_URL);
     ESP_LOGI(TAG, "  Moisture threshold low: %d%%", CONFIG_MOISTURE_THRESHOLD_LOW);
     ESP_LOGI(TAG, "  Moisture threshold high: %d%%", CONFIG_MOISTURE_THRESHOLD_HIGH);
     ESP_LOGI(TAG, "  Max watering duration: %d ms", CONFIG_WATERING_MAX_DURATION_MS);
@@ -413,6 +619,10 @@ void app_main(void)
     motor_set_min_interval(MOTOR_PUMP_2, CONFIG_WATERING_MIN_INTERVAL_MS);
 #endif

+    // Initialize Scheduler
+    ESP_ERROR_CHECK(scheduler_init());
+    scheduler_register_trigger_callback(scheduler_trigger_callback);
+
     // Start WiFi connection
     esp_err_t ret = wifi_manager_start();
     if (ret != ESP_OK) {
@@ -422,14 +632,15 @@ void app_main(void)
     // Create sensor simulation task
     xTaskCreate(sensor_simulation_task, "sensor_sim", 4096, NULL, 5, NULL);

-    // Create automation demo task (disabled by default)
-    xTaskCreate(automation_demo_task, "automation", 4096, NULL, 4, NULL);
+    // Create schedule status task
+    xTaskCreate(schedule_status_task, "schedule_status", 4096, NULL, 4, NULL);

     // Main loop - monitor system status
     while (1) {
-        ESP_LOGI(TAG, "System Status - WiFi: %s, MQTT: %s, Free heap: %d bytes",
+        ESP_LOGI(TAG, "System Status - WiFi: %s, MQTT: %s, Time: %s, Free heap: %d bytes",
                  wifi_manager_is_connected() ? "Connected" : "Disconnected",
                  mqtt_client_is_connected() ? "Connected" : "Disconnected",
+                 scheduler_is_time_synchronized() ? "Synced" : "Not synced",
                  esp_get_free_heap_size());

         // Print pump states and runtime
@@ -448,6 +659,24 @@ void app_main(void)
             ESP_LOGI(TAG, "Pump %d: %s, Total runtime: %lu s, Runs: %lu",
                      i, state_str, stats.total_runtime_ms / 1000, stats.run_count);
         }

+        // Print scheduler status
+        if (scheduler_is_time_synchronized()) {
+            scheduler_status_t sched_status;
+            if (scheduler_get_status(&sched_status) == ESP_OK) {
+                time_t now = scheduler_get_current_time();
+                struct tm timeinfo;
+                localtime_r(&now, &timeinfo);
+
+                char datetime_str[32];
+                strftime(datetime_str, sizeof(datetime_str), "%Y-%m-%d %H:%M:%S", &timeinfo);
+
+                ESP_LOGI(TAG, "Scheduler: %d active, Holiday: %s, DateTime: %s",
+                         sched_status.active_schedules,
+                         sched_status.holiday_mode ? "ON" : "OFF",
+                         datetime_str);
+            }
+        }

         vTaskDelay(30000 / portTICK_PERIOD_MS); // Every 30 seconds
```
871
main/scheduler.c
Normal file
871
main/scheduler.c
Normal file
@ -0,0 +1,871 @@
|
||||
#include <string.h>
|
||||
#include <stdio.h>
|
||||
#include <time.h>
|
||||
#include <sys/time.h>
|
||||
#include "freertos/FreeRTOS.h"
|
||||
#include "freertos/task.h"
|
||||
#include "freertos/semphr.h"
|
||||
#include "esp_log.h"
|
||||
#include "esp_sntp.h"
|
||||
#include "nvs_flash.h"
|
||||
#include "nvs.h"
|
||||
#include "scheduler.h"
|
||||
|
||||
static const char *TAG = "SCHEDULER";
|
||||
|
||||
// NVS namespace
|
||||
#define SCHEDULER_NVS_NAMESPACE "scheduler"
|
||||
|
||||
// Scheduler state
|
||||
typedef struct {
|
||||
bool initialized;
|
||||
bool time_synchronized;
|
||||
time_t last_sync_time;
|
||||
bool holiday_mode;
|
||||
|
||||
// Schedules storage
|
||||
schedule_config_t schedules[SCHEDULER_MAX_PUMPS][SCHEDULER_MAX_SCHEDULES_PER_PUMP];
|
||||
|
||||
// Task handle
|
||||
TaskHandle_t scheduler_task;
|
||||
SemaphoreHandle_t mutex;
|
||||
|
||||
// Callbacks
|
||||
schedule_trigger_callback_t trigger_callback;
|
||||
schedule_status_callback_t status_callback;
|
||||
} scheduler_state_t;
|
||||
|
||||
static scheduler_state_t s_scheduler = {0};
|
||||
|
||||
// Forward declarations
|
||||
static void scheduler_task(void *pvParameters);
|
||||
static esp_err_t save_schedule_to_nvs(uint8_t pump_id, uint8_t schedule_id);
|
||||
static esp_err_t load_schedule_from_nvs(uint8_t pump_id, uint8_t schedule_id);
|
||||
static esp_err_t save_global_settings(void);
|
||||
static esp_err_t load_global_settings(void);
|
||||
static void check_and_execute_schedules(void);
|
||||
static bool should_run_now(const schedule_config_t *config, time_t current_time);
|
||||
|
||||
// NTP sync callback
|
||||
static void time_sync_notification_cb(struct timeval *tv)
|
||||
{
|
||||
ESP_LOGI(TAG, "Time synchronized via NTP");
|
||||
s_scheduler.time_synchronized = true;
|
||||
s_scheduler.last_sync_time = tv->tv_sec;
|
||||
}
|
||||
|
||||
esp_err_t scheduler_init(void)
{
    if (s_scheduler.initialized) {
        return ESP_OK;
    }

    ESP_LOGI(TAG, "Initializing scheduler");

    // Create mutex
    s_scheduler.mutex = xSemaphoreCreateMutex();
    if (s_scheduler.mutex == NULL) {
        ESP_LOGE(TAG, "Failed to create mutex");
        return ESP_ERR_NO_MEM;
    }

    // Initialize schedules array
    memset(s_scheduler.schedules, 0, sizeof(s_scheduler.schedules));

    // Load schedules from NVS
    for (int pump = 0; pump < SCHEDULER_MAX_PUMPS; pump++) {
        for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
            load_schedule_from_nvs(pump + 1, sched);
        }
    }

    // Load global settings
    load_global_settings();

    // Initialize SNTP for time synchronization
    ESP_LOGI(TAG, "Initializing SNTP");
    esp_sntp_setoperatingmode(SNTP_OPMODE_POLL);
    esp_sntp_setservername(0, "pool.ntp.org");
    esp_sntp_setservername(1, "time.nist.gov");
    esp_sntp_setservername(2, "time.google.com");
    sntp_set_time_sync_notification_cb(time_sync_notification_cb);
    esp_sntp_init();

    // Set timezone (adjust as needed)
    setenv("TZ", "MST7MDT,M3.2.0,M11.1.0", 1); // Mountain Time (Denver)
    tzset();

    // Create scheduler task
    if (xTaskCreate(scheduler_task, "scheduler", 4096, NULL, 5, &s_scheduler.scheduler_task) != pdPASS) {
        ESP_LOGE(TAG, "Failed to create scheduler task");
        esp_sntp_stop();
        vSemaphoreDelete(s_scheduler.mutex);
        s_scheduler.mutex = NULL;
        return ESP_ERR_NO_MEM;
    }

    s_scheduler.initialized = true;
    ESP_LOGI(TAG, "Scheduler initialized successfully");

    return ESP_OK;
}

esp_err_t scheduler_deinit(void)
{
    if (!s_scheduler.initialized) {
        return ESP_OK;
    }

    // Stop SNTP
    esp_sntp_stop();

    // Delete task
    if (s_scheduler.scheduler_task) {
        vTaskDelete(s_scheduler.scheduler_task);
    }

    // Delete mutex
    if (s_scheduler.mutex) {
        vSemaphoreDelete(s_scheduler.mutex);
    }

    s_scheduler.initialized = false;
    return ESP_OK;
}

esp_err_t scheduler_add_schedule(uint8_t pump_id, uint8_t schedule_id,
                                 const schedule_config_t *config)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (!config || pump_id < 1 || pump_id > SCHEDULER_MAX_PUMPS ||
        schedule_id >= SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
        return ESP_ERR_INVALID_ARG;
    }

    if (config->type >= SCHEDULE_TYPE_MAX) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    // Copy configuration
    memcpy(&s_scheduler.schedules[pump_id - 1][schedule_id], config, sizeof(schedule_config_t));

    // Calculate next run time
    if (config->enabled && s_scheduler.time_synchronized) {
        time_t now = scheduler_get_current_time();
        s_scheduler.schedules[pump_id - 1][schedule_id].next_run =
            scheduler_calculate_next_run(config, now);
    }

    // Save to NVS
    esp_err_t ret = save_schedule_to_nvs(pump_id, schedule_id);

    xSemaphoreGive(s_scheduler.mutex);

    if (ret == ESP_OK) {
        ESP_LOGI(TAG, "Added schedule %d for pump %d", schedule_id, pump_id);
    }

    return ret;
}

esp_err_t scheduler_get_schedule(uint8_t pump_id, uint8_t schedule_id,
                                 schedule_config_t *config)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (!config || pump_id < 1 || pump_id > SCHEDULER_MAX_PUMPS ||
        schedule_id >= SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);
    memcpy(config, &s_scheduler.schedules[pump_id - 1][schedule_id], sizeof(schedule_config_t));
    xSemaphoreGive(s_scheduler.mutex);

    return ESP_OK;
}

esp_err_t scheduler_remove_schedule(uint8_t pump_id, uint8_t schedule_id)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (pump_id < 1 || pump_id > SCHEDULER_MAX_PUMPS ||
        schedule_id >= SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    // Clear schedule
    memset(&s_scheduler.schedules[pump_id - 1][schedule_id], 0, sizeof(schedule_config_t));

    // Remove from NVS
    nvs_handle_t nvs_handle;
    esp_err_t ret = nvs_open(SCHEDULER_NVS_NAMESPACE, NVS_READWRITE, &nvs_handle);
    if (ret == ESP_OK) {
        char key[32];
        snprintf(key, sizeof(key), "sched_%d_%d", pump_id, schedule_id);
        nvs_erase_key(nvs_handle, key);
        nvs_commit(nvs_handle);
        nvs_close(nvs_handle);
    }

    xSemaphoreGive(s_scheduler.mutex);

    ESP_LOGI(TAG, "Removed schedule %d for pump %d", schedule_id, pump_id);
    return ret;
}

esp_err_t scheduler_enable_schedule(uint8_t pump_id, uint8_t schedule_id, bool enable)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (pump_id < 1 || pump_id > SCHEDULER_MAX_PUMPS ||
        schedule_id >= SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    s_scheduler.schedules[pump_id - 1][schedule_id].enabled = enable;

    // Recalculate next run time if enabling
    if (enable && s_scheduler.time_synchronized) {
        time_t now = scheduler_get_current_time();
        s_scheduler.schedules[pump_id - 1][schedule_id].next_run =
            scheduler_calculate_next_run(&s_scheduler.schedules[pump_id - 1][schedule_id], now);
    }

    esp_err_t ret = save_schedule_to_nvs(pump_id, schedule_id);

    xSemaphoreGive(s_scheduler.mutex);

    ESP_LOGI(TAG, "%s schedule %d for pump %d", enable ? "Enabled" : "Disabled",
             schedule_id, pump_id);
    return ret;
}

esp_err_t scheduler_set_time(time_t current_time)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    struct timeval tv = {
        .tv_sec = current_time,
        .tv_usec = 0
    };

    settimeofday(&tv, NULL);

    s_scheduler.time_synchronized = true;
    s_scheduler.last_sync_time = current_time;

    // Recalculate all next run times
    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    for (int pump = 0; pump < SCHEDULER_MAX_PUMPS; pump++) {
        for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
            if (s_scheduler.schedules[pump][sched].enabled) {
                s_scheduler.schedules[pump][sched].next_run =
                    scheduler_calculate_next_run(&s_scheduler.schedules[pump][sched], current_time);
            }
        }
    }

    xSemaphoreGive(s_scheduler.mutex);

    ESP_LOGI(TAG, "Time set manually to %lld", (long long)current_time);
    return ESP_OK;
}

esp_err_t scheduler_sync_time_ntp(void)
{
    if (esp_sntp_get_sync_status() == SNTP_SYNC_STATUS_IN_PROGRESS) {
        return ESP_ERR_NOT_FINISHED;
    }

    // Trigger sync
    esp_sntp_restart();

    return ESP_OK;
}

bool scheduler_is_time_synchronized(void)
{
    return s_scheduler.time_synchronized;
}

time_t scheduler_get_current_time(void)
{
    time_t now;
    time(&now);
    return now;
}

esp_err_t scheduler_set_holiday_mode(bool enabled)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);
    s_scheduler.holiday_mode = enabled;
    esp_err_t ret = save_global_settings();
    xSemaphoreGive(s_scheduler.mutex);

    ESP_LOGI(TAG, "Holiday mode %s", enabled ? "enabled" : "disabled");
    return ret;
}

bool scheduler_get_holiday_mode(void)
{
    return s_scheduler.holiday_mode;
}

esp_err_t scheduler_get_status(scheduler_status_t *status)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (!status) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    status->holiday_mode = s_scheduler.holiday_mode;
    status->time_synchronized = s_scheduler.time_synchronized;
    status->last_sync_time = s_scheduler.last_sync_time;

    // Count active schedules
    status->active_schedules = 0;
    for (int pump = 0; pump < SCHEDULER_MAX_PUMPS; pump++) {
        for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
            if (s_scheduler.schedules[pump][sched].enabled &&
                s_scheduler.schedules[pump][sched].type != SCHEDULE_TYPE_DISABLED) {
                status->active_schedules++;
            }
        }
    }

    xSemaphoreGive(s_scheduler.mutex);

    return ESP_OK;
}

time_t scheduler_calculate_next_run(const schedule_config_t *config, time_t from_time)
{
    if (!config || config->type == SCHEDULE_TYPE_DISABLED || !config->enabled) {
        return 0;
    }

    struct tm timeinfo;
    localtime_r(&from_time, &timeinfo);

    switch (config->type) {
        case SCHEDULE_TYPE_INTERVAL:
            // Simple interval from last run or from now
            if (config->last_run > 0) {
                return config->last_run + (config->interval_minutes * 60);
            } else {
                return from_time + (config->interval_minutes * 60);
            }

        case SCHEDULE_TYPE_TIME_OF_DAY:
        {
            // Daily at a specific time
            struct tm next_time = timeinfo;
            next_time.tm_hour = config->hour;
            next_time.tm_min = config->minute;
            next_time.tm_sec = 0;

            time_t next_run = mktime(&next_time);

            // If the time has passed today, schedule for tomorrow
            if (next_run <= from_time) {
                next_time.tm_mday++;
                next_run = mktime(&next_time);
            }

            return next_run;
        }

        case SCHEDULE_TYPE_DAYS_TIME:
        {
            // Specific days at a specific time
            struct tm next_time = timeinfo;
            next_time.tm_hour = config->hour;
            next_time.tm_min = config->minute;
            next_time.tm_sec = 0;

            // Find the next matching day
            for (int days_ahead = 0; days_ahead < 8; days_ahead++) {
                struct tm check_time = next_time;
                check_time.tm_mday += days_ahead;
                time_t check_timestamp = mktime(&check_time);
                localtime_r(&check_timestamp, &check_time);

                // Check if this day matches our mask
                uint8_t day_bit = (1 << check_time.tm_wday);
                if ((config->days_mask & day_bit) && check_timestamp > from_time) {
                    return check_timestamp;
                }
            }

            return 0; // No matching day found (shouldn't happen with a valid mask)
        }

        default:
            return 0;
    }
}

// Task that checks and executes schedules
static void scheduler_task(void *pvParameters)
{
    ESP_LOGI(TAG, "Scheduler task started");

    while (1) {
        // Wait 30 seconds between checks
        vTaskDelay(pdMS_TO_TICKS(30000));

        if (!s_scheduler.time_synchronized) {
            ESP_LOGD(TAG, "Waiting for time synchronization...");
            continue;
        }

        if (s_scheduler.holiday_mode) {
            ESP_LOGD(TAG, "Holiday mode active, skipping schedules");
            continue;
        }

        check_and_execute_schedules();
    }
}

static void check_and_execute_schedules(void)
{
    time_t now = scheduler_get_current_time();

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    for (int pump = 0; pump < SCHEDULER_MAX_PUMPS; pump++) {
        for (int sched = 0; sched < SCHEDULER_MAX_SCHEDULES_PER_PUMP; sched++) {
            schedule_config_t *schedule = &s_scheduler.schedules[pump][sched];

            if (!schedule->enabled || schedule->type == SCHEDULE_TYPE_DISABLED) {
                continue;
            }

            // Check if it's time to run
            if (should_run_now(schedule, now)) {
                ESP_LOGI(TAG, "Triggering schedule %d for pump %d", sched, pump + 1);

                // Update last run time
                schedule->last_run = now;

                // Calculate next run time
                schedule->next_run = scheduler_calculate_next_run(schedule, now);

                // Save updated schedule
                save_schedule_to_nvs(pump + 1, sched);

                // Call the trigger callback. Note: it runs while the scheduler
                // mutex is held, so it must not call back into scheduler APIs
                // that take the mutex.
                if (s_scheduler.trigger_callback) {
                    s_scheduler.trigger_callback(pump + 1, sched,
                                                 schedule->duration_ms,
                                                 schedule->speed_percent);
                }
            }
        }
    }

    xSemaphoreGive(s_scheduler.mutex);
}

static bool should_run_now(const schedule_config_t *config, time_t current_time)
{
    if (!config || !config->enabled || config->type == SCHEDULE_TYPE_DISABLED) {
        return false;
    }

    // Don't run if we've run in the last minute (prevents double triggers)
    if (config->last_run > 0 && (current_time - config->last_run) < 60) {
        return false;
    }

    switch (config->type) {
        case SCHEDULE_TYPE_INTERVAL:
            // Check if the interval has elapsed
            if (config->last_run == 0) {
                // First run
                return true;
            }
            return (current_time - config->last_run) >= (config->interval_minutes * 60);

        case SCHEDULE_TYPE_TIME_OF_DAY:
        case SCHEDULE_TYPE_DAYS_TIME:
            // Check if we're within a minute of the scheduled time
            if (config->next_run > 0 &&
                current_time >= config->next_run &&
                current_time < (config->next_run + 60)) {
                return true;
            }
            break;

        case SCHEDULE_TYPE_DISABLED:
        case SCHEDULE_TYPE_MAX:
        default:
            // Should never reach here due to the initial check, but needed for the compiler
            break;
    }

    return false;
}

// JSON serialization

// Append formatted text to the JSON buffer. Saturates at buffer_size on
// truncation so later appends cannot write past the end of the buffer
// (snprintf returns the would-be length, which would otherwise make
// buffer_size - written wrap around).
static int json_append(char *buffer, size_t buffer_size, int written, const char *fmt, ...)
{
    if (written < 0 || (size_t)written >= buffer_size) {
        return (int)buffer_size;
    }
    va_list args;
    va_start(args, fmt);
    int n = vsnprintf(buffer + written, buffer_size - written, fmt, args);
    va_end(args);
    return (n < 0) ? (int)buffer_size : written + n;
}

esp_err_t scheduler_schedule_to_json(uint8_t pump_id, uint8_t schedule_id,
                                     char *buffer, size_t buffer_size)
{
    if (!buffer || buffer_size == 0) {
        return ESP_ERR_INVALID_ARG;
    }

    schedule_config_t config;
    esp_err_t ret = scheduler_get_schedule(pump_id, schedule_id, &config);
    if (ret != ESP_OK) {
        return ret;
    }

    // Build JSON manually without the cJSON library
    int written = json_append(buffer, buffer_size, 0,
                              "{\"pump_id\":%d,\"schedule_id\":%d,\"type\":\"%s\",\"enabled\":%s,",
                              pump_id, schedule_id,
                              scheduler_get_type_string(config.type),
                              config.enabled ? "true" : "false");

    // Add type-specific fields
    switch (config.type) {
        case SCHEDULE_TYPE_INTERVAL:
            written = json_append(buffer, buffer_size, written,
                                  "\"interval_minutes\":%lu,",
                                  (unsigned long)config.interval_minutes);
            break;

        case SCHEDULE_TYPE_TIME_OF_DAY:
            written = json_append(buffer, buffer_size, written,
                                  "\"hour\":%d,\"minute\":%d,", config.hour, config.minute);
            break;

        case SCHEDULE_TYPE_DAYS_TIME:
        {
            char days_str[64];
            scheduler_get_days_string(config.days_mask, days_str, sizeof(days_str));
            written = json_append(buffer, buffer_size, written,
                                  "\"hour\":%d,\"minute\":%d,\"days_mask\":%d,\"days\":\"%s\",",
                                  config.hour, config.minute, config.days_mask, days_str);
            break;
        }

        case SCHEDULE_TYPE_DISABLED:
        case SCHEDULE_TYPE_MAX:
        default:
            // No additional fields for the disabled type
            break;
    }

    // Add common fields
    written = json_append(buffer, buffer_size, written,
                          "\"duration_ms\":%lu,\"speed_percent\":%d",
                          (unsigned long)config.duration_ms, config.speed_percent);

    // Add runtime info if available
    if (config.last_run > 0) {
        written = json_append(buffer, buffer_size, written,
                              ",\"last_run\":%lld", (long long)config.last_run);
    }
    if (config.next_run > 0) {
        struct tm timeinfo;
        localtime_r(&config.next_run, &timeinfo);
        char time_str[64];
        strftime(time_str, sizeof(time_str), "%Y-%m-%d %H:%M:%S", &timeinfo);
        written = json_append(buffer, buffer_size, written,
                              ",\"next_run\":%lld,\"next_run_str\":\"%s\"",
                              (long long)config.next_run, time_str);
    }

    // Close JSON
    written = json_append(buffer, buffer_size, written, "}");

    return ((size_t)written < buffer_size) ? ESP_OK : ESP_ERR_INVALID_SIZE;
}

esp_err_t scheduler_json_to_schedule(const char *json, uint8_t pump_id, uint8_t schedule_id)
{
    if (!json) {
        return ESP_ERR_INVALID_ARG;
    }

    schedule_config_t config = {0};

    // Simple JSON parsing without cJSON:
    // look for key patterns in the JSON string.
    const char *p;

    // Parse type
    p = strstr(json, "\"type\":");
    if (p) {
        p += 7; // Skip "type":
        while (*p == ' ' || *p == '"') p++;
        if (strncmp(p, "disabled", 8) == 0) {
            config.type = SCHEDULE_TYPE_DISABLED;
        } else if (strncmp(p, "interval", 8) == 0) {
            config.type = SCHEDULE_TYPE_INTERVAL;
        } else if (strncmp(p, "time_of_day", 11) == 0) {
            config.type = SCHEDULE_TYPE_TIME_OF_DAY;
        } else if (strncmp(p, "days_time", 9) == 0) {
            config.type = SCHEDULE_TYPE_DAYS_TIME;
        }
    }

    // Parse enabled
    p = strstr(json, "\"enabled\":");
    if (p) {
        p += 10;
        while (*p == ' ') p++;
        config.enabled = (strncmp(p, "true", 4) == 0);
    }

    // Parse interval_minutes for the interval type
    if (config.type == SCHEDULE_TYPE_INTERVAL) {
        p = strstr(json, "\"interval_minutes\":");
        if (p) {
            p += 19;
            config.interval_minutes = atoi(p);
        }
    }

    // Parse hour and minute for the time-based types
    if (config.type == SCHEDULE_TYPE_TIME_OF_DAY || config.type == SCHEDULE_TYPE_DAYS_TIME) {
        p = strstr(json, "\"hour\":");
        if (p) {
            p += 7;
            config.hour = atoi(p);
        }

        p = strstr(json, "\"minute\":");
        if (p) {
            p += 9;
            config.minute = atoi(p);
        }

        // Parse days_mask for the days_time type
        if (config.type == SCHEDULE_TYPE_DAYS_TIME) {
            p = strstr(json, "\"days_mask\":");
            if (p) {
                p += 12;
                config.days_mask = atoi(p);
            }
        }
    }

    // Parse duration_ms
    p = strstr(json, "\"duration_ms\":");
    if (p) {
        p += 14;
        config.duration_ms = atoi(p);
    }

    // Parse speed_percent
    p = strstr(json, "\"speed_percent\":");
    if (p) {
        p += 16;
        config.speed_percent = atoi(p);
    }

    // Add the schedule
    return scheduler_add_schedule(pump_id, schedule_id, &config);
}

// NVS persistence
static esp_err_t save_schedule_to_nvs(uint8_t pump_id, uint8_t schedule_id)
{
    nvs_handle_t nvs_handle;
    esp_err_t ret = nvs_open(SCHEDULER_NVS_NAMESPACE, NVS_READWRITE, &nvs_handle);
    if (ret != ESP_OK) {
        return ret;
    }

    char key[32];
    snprintf(key, sizeof(key), "sched_%d_%d", pump_id, schedule_id);

    // Don't save runtime fields
    schedule_config_t config = s_scheduler.schedules[pump_id - 1][schedule_id];
    config.last_run = 0;
    config.next_run = 0;

    ret = nvs_set_blob(nvs_handle, key, &config, sizeof(schedule_config_t));

    if (ret == ESP_OK) {
        nvs_commit(nvs_handle);
    }

    nvs_close(nvs_handle);
    return ret;
}

static esp_err_t load_schedule_from_nvs(uint8_t pump_id, uint8_t schedule_id)
{
    nvs_handle_t nvs_handle;
    esp_err_t ret = nvs_open(SCHEDULER_NVS_NAMESPACE, NVS_READONLY, &nvs_handle);
    if (ret != ESP_OK) {
        return ret;
    }

    char key[32];
    snprintf(key, sizeof(key), "sched_%d_%d", pump_id, schedule_id);

    size_t length = sizeof(schedule_config_t);
    ret = nvs_get_blob(nvs_handle, key, &s_scheduler.schedules[pump_id - 1][schedule_id], &length);

    nvs_close(nvs_handle);

    if (ret == ESP_OK) {
        ESP_LOGI(TAG, "Loaded schedule %d for pump %d from NVS", schedule_id, pump_id);
    }

    return ret;
}

static esp_err_t save_global_settings(void)
{
    nvs_handle_t nvs_handle;
    esp_err_t ret = nvs_open(SCHEDULER_NVS_NAMESPACE, NVS_READWRITE, &nvs_handle);
    if (ret != ESP_OK) {
        return ret;
    }

    ret = nvs_set_u8(nvs_handle, "holiday_mode", s_scheduler.holiday_mode ? 1 : 0);

    if (ret == ESP_OK) {
        nvs_commit(nvs_handle);
    }

    nvs_close(nvs_handle);
    return ret;
}

static esp_err_t load_global_settings(void)
{
    nvs_handle_t nvs_handle;
    esp_err_t ret = nvs_open(SCHEDULER_NVS_NAMESPACE, NVS_READONLY, &nvs_handle);
    if (ret != ESP_OK) {
        return ret;
    }

    uint8_t holiday_mode = 0;
    ret = nvs_get_u8(nvs_handle, "holiday_mode", &holiday_mode);
    if (ret == ESP_OK) {
        s_scheduler.holiday_mode = (holiday_mode != 0);
    }

    nvs_close(nvs_handle);
    return ret;
}

// Utility functions
const char* scheduler_get_type_string(schedule_type_t type)
{
    switch (type) {
        case SCHEDULE_TYPE_DISABLED:    return "disabled";
        case SCHEDULE_TYPE_INTERVAL:    return "interval";
        case SCHEDULE_TYPE_TIME_OF_DAY: return "time_of_day";
        case SCHEDULE_TYPE_DAYS_TIME:   return "days_time";
        default:                        return "unknown";
    }
}

const char* scheduler_get_days_string(uint8_t days_mask, char *buffer, size_t size)
{
    if (!buffer || size == 0) {
        return "";
    }

    buffer[0] = '\0';

    if (days_mask == SCHEDULE_DAY_ALL) {
        strlcpy(buffer, "Daily", size);
        return buffer;
    }

    if (days_mask == SCHEDULE_DAY_WEEKDAYS) {
        strlcpy(buffer, "Weekdays", size);
        return buffer;
    }

    if (days_mask == SCHEDULE_DAY_WEEKEND) {
        strlcpy(buffer, "Weekends", size);
        return buffer;
    }

    // Build a custom day string
    const char *days[] = {"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"};
    bool first = true;

    for (int i = 0; i < 7; i++) {
        if (days_mask & (1 << i)) {
            if (!first) {
                strlcat(buffer, ",", size);
            }
            strlcat(buffer, days[i], size);
            first = false;
        }
    }

    return buffer;
}

// Callbacks
void scheduler_register_trigger_callback(schedule_trigger_callback_t callback)
{
    s_scheduler.trigger_callback = callback;
}

void scheduler_register_status_callback(schedule_status_callback_t callback)
{
    s_scheduler.status_callback = callback;
}

// Manual trigger for testing
esp_err_t scheduler_trigger_schedule(uint8_t pump_id, uint8_t schedule_id)
{
    if (!s_scheduler.initialized) {
        return ESP_ERR_INVALID_STATE;
    }

    if (pump_id < 1 || pump_id > SCHEDULER_MAX_PUMPS ||
        schedule_id >= SCHEDULER_MAX_SCHEDULES_PER_PUMP) {
        return ESP_ERR_INVALID_ARG;
    }

    xSemaphoreTake(s_scheduler.mutex, portMAX_DELAY);

    schedule_config_t *schedule = &s_scheduler.schedules[pump_id - 1][schedule_id];

    if (schedule->type == SCHEDULE_TYPE_DISABLED || !schedule->enabled) {
        xSemaphoreGive(s_scheduler.mutex);
        return ESP_ERR_INVALID_STATE;
    }

    ESP_LOGI(TAG, "Manual trigger of schedule %d for pump %d", schedule_id, pump_id);

    // Call trigger callback
    if (s_scheduler.trigger_callback) {
        s_scheduler.trigger_callback(pump_id, schedule_id,
                                     schedule->duration_ms,
                                     schedule->speed_percent);
    }

    xSemaphoreGive(s_scheduler.mutex);

    return ESP_OK;
}
124
main/scheduler.h
Normal file
@ -0,0 +1,124 @@
#ifndef SCHEDULER_H
#define SCHEDULER_H

#include <stdbool.h>
#include <stdint.h>
#include <time.h>
#include "esp_err.h"

// Maximum number of schedules per pump
#define SCHEDULER_MAX_SCHEDULES_PER_PUMP 4
#define SCHEDULER_MAX_PUMPS 2

// Schedule types
typedef enum {
    SCHEDULE_TYPE_DISABLED = 0,
    SCHEDULE_TYPE_INTERVAL,      // Every X minutes
    SCHEDULE_TYPE_TIME_OF_DAY,   // Daily at a specific time
    SCHEDULE_TYPE_DAYS_TIME,     // Specific days at a specific time
    SCHEDULE_TYPE_MAX
} schedule_type_t;

// Days of week bitmask (bit 0 = Sunday, bit 6 = Saturday)
#define SCHEDULE_DAY_SUNDAY    (1 << 0)
#define SCHEDULE_DAY_MONDAY    (1 << 1)
#define SCHEDULE_DAY_TUESDAY   (1 << 2)
#define SCHEDULE_DAY_WEDNESDAY (1 << 3)
#define SCHEDULE_DAY_THURSDAY  (1 << 4)
#define SCHEDULE_DAY_FRIDAY    (1 << 5)
#define SCHEDULE_DAY_SATURDAY  (1 << 6)
#define SCHEDULE_DAY_WEEKDAYS  (SCHEDULE_DAY_MONDAY | SCHEDULE_DAY_TUESDAY | \
                                SCHEDULE_DAY_WEDNESDAY | SCHEDULE_DAY_THURSDAY | \
                                SCHEDULE_DAY_FRIDAY)
#define SCHEDULE_DAY_WEEKEND   (SCHEDULE_DAY_SATURDAY | SCHEDULE_DAY_SUNDAY)
#define SCHEDULE_DAY_ALL       0x7F

// Schedule configuration
typedef struct {
    schedule_type_t type;
    bool enabled;

    // Timing configuration
    uint32_t interval_minutes;   // For SCHEDULE_TYPE_INTERVAL
    uint8_t hour;                // For TIME_OF_DAY and DAYS_TIME (0-23)
    uint8_t minute;              // For TIME_OF_DAY and DAYS_TIME (0-59)
    uint8_t days_mask;           // For DAYS_TIME (bitmask)

    // Watering configuration
    uint32_t duration_ms;        // How long to water (milliseconds)
    uint8_t speed_percent;       // Pump speed (0-100)

    // Runtime info (not saved to NVS)
    time_t last_run;             // Last execution timestamp
    time_t next_run;             // Next scheduled run
} schedule_config_t;

// Schedule entry with ID
typedef struct {
    uint8_t pump_id;             // Which pump (1 or 2)
    uint8_t schedule_id;         // Schedule slot (0-3)
    schedule_config_t config;    // Schedule configuration
} schedule_entry_t;

// Scheduler status
typedef struct {
    bool holiday_mode;           // Global disable for all schedules
    bool time_synchronized;      // Whether we have valid time
    time_t last_sync_time;       // When time was last synchronized
    uint32_t active_schedules;   // Number of active schedules
} scheduler_status_t;

// Callbacks
typedef void (*schedule_trigger_callback_t)(uint8_t pump_id, uint8_t schedule_id,
                                            uint32_t duration_ms, uint8_t speed_percent);
typedef void (*schedule_status_callback_t)(const char* status_json);

// Scheduler functions
esp_err_t scheduler_init(void);
esp_err_t scheduler_deinit(void);

// Schedule management
esp_err_t scheduler_add_schedule(uint8_t pump_id, uint8_t schedule_id,
                                 const schedule_config_t *config);
esp_err_t scheduler_get_schedule(uint8_t pump_id, uint8_t schedule_id,
                                 schedule_config_t *config);
esp_err_t scheduler_remove_schedule(uint8_t pump_id, uint8_t schedule_id);
esp_err_t scheduler_enable_schedule(uint8_t pump_id, uint8_t schedule_id, bool enable);
esp_err_t scheduler_clear_all_schedules(void);

// Time management
esp_err_t scheduler_set_time(time_t current_time);
esp_err_t scheduler_sync_time_ntp(void);
bool scheduler_is_time_synchronized(void);
time_t scheduler_get_current_time(void);

// Holiday mode
esp_err_t scheduler_set_holiday_mode(bool enabled);
bool scheduler_get_holiday_mode(void);

// Status and information
esp_err_t scheduler_get_status(scheduler_status_t *status);
esp_err_t scheduler_get_next_run_times(time_t *next_runs, size_t max_count);
esp_err_t scheduler_get_all_schedules(schedule_entry_t *entries, size_t max_entries,
                                      size_t *count);

// JSON serialization for MQTT
esp_err_t scheduler_schedule_to_json(uint8_t pump_id, uint8_t schedule_id,
                                     char *buffer, size_t buffer_size);
esp_err_t scheduler_json_to_schedule(const char *json, uint8_t pump_id,
                                     uint8_t schedule_id);
esp_err_t scheduler_status_to_json(char *buffer, size_t buffer_size);

// Callbacks
void scheduler_register_trigger_callback(schedule_trigger_callback_t callback);
void scheduler_register_status_callback(schedule_status_callback_t callback);

// Manual trigger (for testing)
esp_err_t scheduler_trigger_schedule(uint8_t pump_id, uint8_t schedule_id);

// Utility functions
const char* scheduler_get_type_string(schedule_type_t type);
const char* scheduler_get_days_string(uint8_t days_mask, char *buffer, size_t size);
time_t scheduler_calculate_next_run(const schedule_config_t *config, time_t from_time);

#endif // SCHEDULER_H