
Fix for issue with Sammelkalender_CH after update #5213

Open

JonasArnold wants to merge 7 commits into mampfes:master from JonasArnold:master

Conversation

@JonasArnold

These changes fix an issue which arose for some users after updating to integration version 2.11.0.

Issue:
Before version 2.11.0, the default Sammelgebiet was always used, since the API returned a single null Sammelgebiet. PR #4832 changed this. Unfortunately, a new issue arose with this change: it is possible not to configure a Sammelgebiet at all (which some users did), and in this case the integration now fails to fetch the waste collection dates.

Implemented Fix:
Instead of throwing an exception when the Sammelgebiet (street parameter) is missing, a warning log entry is created which points out to the user that the street parameter should be configured. The first returned Sammelgebiet is then used to fetch the waste collection dates (similar behaviour to before PR #4832).
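For illustration, the described fallback could look roughly like this (a sketch of the idea, not the exact diff of this PR; the variable names street and sammelgebiete are assumptions):

```python
import logging

_LOGGER = logging.getLogger(__name__)

# Sketch of the described fallback, not the exact code of this PR.
# "street" and "sammelgebiete" are illustrative names.
if not street:
    _LOGGER.warning(
        "No Sammelgebiet configured; falling back to the first Sammelgebiet "
        "returned by the API. Please configure the 'street' parameter."
    )
    street = sammelgebiete[0]  # similar behaviour to before PR #4832
```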

Improvements:
It would be cleaner to automatically migrate all existing configurations to contain the default Sammelgebiet, but I do not know how to do that.

previously the first sammelgebiet was used by default, which led to a breaking change for some existing users, since a sammelgebiet is now required but none was configured.
@5ila5
Collaborator

5ila5 commented Jan 3, 2026

I really don't like this. It means you can configure the source without a street and may get the wrong dates, and you can only see this in the logs once you have finished the configuration. This also "removes" the drop-down generated by argumentRequiredWithSuggestions.

To modify the existing sources, you could write a migration in def async_migrate_entry in custom_components/waste_collection_schedule/init_ui.py, but this is pretty complex in this case.
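For illustration only, such a migration could look roughly like this (a sketch under assumptions: the "args" key in the entry data and the _default_sammelgebiet helper are invented here, and determining the correct default per entry is the complex part):

```python
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant


async def async_migrate_entry(hass: HomeAssistant, config_entry: ConfigEntry) -> bool:
    """Sketch: inject a default Sammelgebiet into old entries missing a street."""
    args = {**config_entry.data.get("args", {})}  # "args" key is an assumption
    if not args.get("street"):
        # _default_sammelgebiet is a hypothetical helper that would query the
        # API for the first Sammelgebiet of the configured municipality.
        args["street"] = await _default_sammelgebiet(hass, args)
        hass.config_entries.async_update_entry(
            config_entry, data={**config_entry.data, "args": args}
        )
    return True
```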

I think people who don't have a street configured need to do so. It's not much work and is better than letting users configure the source wrongly in the future.

@JonasArnold
Author

I do agree with you.

The reason why I am proposing this is that the workflow to "add a street" when there is none configured is not very user-friendly.

I click "Reconfigure" on the hub.
[screenshot: reconfigure dialog of the hub]

I insert the street and click submit. Then a new entry is created below the old one in the same hub and the old one becomes "unavailable":
[screenshot: new entry created below the old, now unavailable, entry]

This is worse than deleting the hub completely, since there is no way to get rid of the first entry.
Also, if you have sensors configured, you need to completely reconfigure them from scratch since they are unavailable too.

Should this work better?

@5ila5
Collaborator

5ila5 commented Jan 4, 2026

I noticed this issue as well. I thought the sensors should stay (they might get recreated).

It's because of how we set the unique id:

await self.async_set_unique_id(source + json.dumps(args_input))

So if the source arguments change, the unique id changes. But I don't really know how to fix this while allowing multiple sources for the same provider.
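To make the effect concrete (an illustration only, not code from the integration):

```python
import json

# Changing the street argument changes the unique id, so the entities created
# under the old id are orphaned after a reconfigure.
source = "sammelkalender_ch"
old_id = source + json.dumps({"municipality": "Beispiel", "street": None})
new_id = source + json.dumps({"municipality": "Beispiel", "street": "Hauptstrasse 1"})
assert old_id != new_id
```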

@JonasArnold
Author

I really went down the rabbit hole and tried to find a solution for the reconfiguration issue.
According to the documentation for config flows, the method async_update_reload_and_abort is not designed to handle a changing unique id.

Since the unique id currently depends on the exact arguments the user provides when configuring or reconfiguring, it obviously changes when the user reconfigures and changes the address.

The reconfigure flow can only be called per Hub. So if multiple Configs exist in one Hub this is bad, since there is no way to remove/change individual Configs.

Therefore I recommend making the unique id dependent only on the Source + Municipality and not on the street argument.
Then, in the reconfigure flow, this should be done to check whether the unique id has changed:

await self.async_set_unique_id(<generated unique id from Source + Municipality>)
self._abort_if_unique_id_mismatch()
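A rough sketch of how that could look in the reconfigure step (not implemented; _stable_unique_id and _build_schema are hypothetical helpers, and recent Home Assistant config-flow methods are assumed):

```python
# Sketch only. _stable_unique_id would derive the id from Source + Municipality
# but not the street; _build_schema would rebuild the form from the entry.
async def async_step_reconfigure(self, user_input=None):
    entry = self._get_reconfigure_entry()
    if user_input is not None:
        await self.async_set_unique_id(_stable_unique_id(self._source, user_input))
        # Abort if the user tries to reconfigure into a different municipality.
        self._abort_if_unique_id_mismatch()
        return self.async_update_reload_and_abort(entry, data_updates={"args": user_input})
    return self.async_show_form(step_id="reconfigure", data_schema=_build_schema(entry))
```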

Obviously we would need to somehow migrate existing unique ids, or, if a unique id already exists, not re-generate it in the reconfigure flow.

Let me know what you think. My solution is not yet implemented!

Also, I probably found the reason the sensors are not generated when reconfiguring:
In the reconfiguration method, the title cannot be edited by the user and module.TITLE is used.

In the configuration method this is a bit more complex and self._title is used.

@5ila5
Collaborator

5ila5 commented Jan 5, 2026

I only have time for a short reply; I'll look at this in more detail soon.

This ID from Source + Municipality has two problems. First, I don't know how to properly do this for all sources; we don't have municipalities for all sources: what about the ICS source, where the only argument may be a URL? I think marking some arguments to be used for generating the unique_id and some not is a bad idea, as it does not scale to all the different sources we currently have.

Additionally, this would make it impossible to have two Hubs / Sources for the same municipality with different addresses.

@5ila5
Collaborator

5ila5 commented Jan 28, 2026

I don't have the time to look into this in the appropriate detail.

I think there are two issues. In the reconfigure step we set the unique_id for the config entry, but as far as I can tell this unique id itself is not used for the creation of the entities. So this is probably the problem:

https://github.com/mampfes/hacs_waste_collection_schedule/blob/dc3833b46340ab7153ea118399640f0896eda09a/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py#L235C1-L250C58

As far as I understand it, we need to not call set_unique_id if we are in the reconfigure step and then use entry.unique_id for the SourceShell instead of creating new unique ids.
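In other words, something along these lines (a sketch; the surrounding names are assumptions, not the actual code at the linked lines of source_shell.py):

```python
# Sketch only; names here are assumptions about source_shell.py.
# Reuse the unique id already stored on the config entry (e.g. after a
# reconfigure) instead of deriving a fresh one from the current arguments.
if entry.unique_id is not None:
    unique_id = entry.unique_id
else:
    unique_id = source_name + json.dumps(source_args)
```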
