Curated AI news aggregator from premium sources - Auto-updated with webhook, paginated display

ThePhoenixAgency/AI-Pulse


AI-PULSE Banner

Curated content from the best sources

Last Update: Fri, 27 Feb 2026 22:16:53 GMT


About The Developer

Built by ThePhoenixAgency - AI & Cybersecurity Specialist

Passionate about building secure, privacy-first applications that make a difference. This project showcases my expertise in full-stack development, security engineering, and data privacy.


Real-Time News Roundup

AI - Artificial Intelligence / IA - Intelligence Artificielle

Source: VentureBeat AI

The artificial intelligence coding revolution comes with a catch: it's expensive. Claude Code, Anthropic's terminal-based AI agent that can write, debug, and deploy code autonomously, has captured the imagination of software developers worldwide. But its pricing — ranging from $20 to $200 per month depending on usage — has sparked a growing rebellion among the very programmers it aims to serve.

Now, a free alternative is gaining traction. Goose, an open-source AI agent developed by Block (the financial technology company formerly known as Square), offers nearly identical functionality to Claude Code but runs entirely on a user's local machine. No subscription fees. No cloud dependency. No rate limits that reset every five hours. "Your data stays with you, period," said Parth Sareen, a software engineer who demonstrated the tool during a recent livestream. The line captures the core appeal: Goose gives developers complete control over their AI-powered workflow, including the ability to work offline — even on an airplane.

The project has exploded in popularity. Goose now boasts more than 26,100 stars on GitHub, the code-sharing platform, with 362 contributors and 102 releases since its launch. The latest version, 1.20.1, shipped on January 19, 2026, reflecting a development pace that rivals commercial products. For developers frustrated by Claude Code's pricing structure and usage caps, Goose represents something increasingly rare in the AI industry: a genuinely free, no-strings-attached option for serious work.

Anthropic's new rate limits spark a developer revolt

To understand why Goose matters, you need to understand the Claude Code pricing controversy. Anthropic, the San Francisco artificial intelligence company founded by former OpenAI executives, offers Claude Code as part of its subscription tiers. The free plan provides no access whatsoever.
The Pro plan, at $17 per month with annual billing (or $20 monthly), limits users to just 10 to 40 prompts every five hours — a constraint that serious developers exhaust within minutes of intensive work. The Max plans, at $100 and $200 per month, offer more headroom: 50 to 200 prompts and 200 to 800 prompts respectively, plus access to Anthropic's most powerful model, Claude 4.5 Opus. But even these premium tiers come with restrictions that have inflamed the developer community. In late July, Anthropic announced new weekly rate limits. Under the system, Pro users receive 40 to 80 hours of Sonnet 4 usage per week. Max users at the $200 tier get 240 to 480 hours of Sonnet 4, plus 24 to 40 hours of Opus 4. Nearly five months later, the frustration has not subsided. The problem? Those "hours" are not actual hours. They represent token-based limits that vary wildly depending on codebase size, conversation length, and the complexity of the code being processed. Independent analysis suggests the actual per-session limits translate to roughly 44,000 tokens for Pro users and 220,000 tokens for the $200 Max plan. "It's confusing and vague," one developer wrote in a widely shared analysis. "When they say '24-40 hours of Opus 4,' that doesn't really tell you anything useful about what you're actually getting." The backlash on Reddit and developer forums has been fierce. Some users report hitting their daily limits within 30 minutes of intensive coding. Others have canceled their subscriptions entirely, calling the new restrictions "a joke" and "unusable for real work." Anthropic has defended the changes, stating that the limits affect fewer than five percent of users and target people running Claude Code "continuously in the background, 24/7." But the company has not clarified whether that figure refers to five percent of Max subscribers or five percent of all users — a distinction that matters enormously. 
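The gap between advertised "hours" and the token limits underneath them is easy to see with a little arithmetic. The sketch below uses the per-session token estimates cited above; the average-tokens-per-prompt figure is an illustrative assumption, not a number from the article:

```python
# Per-5-hour-session token limits, per the independent analysis cited above.
SESSION_TOKEN_LIMITS = {
    "Pro": 44_000,
    "Max $200": 220_000,
}

def prompts_per_session(plan: str, avg_tokens_per_prompt: int = 2_000) -> int:
    """Estimate how many prompts fit in one 5-hour window, assuming a
    (hypothetical) average token cost per prompt."""
    return SESSION_TOKEN_LIMITS[plan] // avg_tokens_per_prompt

for plan in SESSION_TOKEN_LIMITS:
    print(plan, prompts_per_session(plan))
# A 2,000-token average yields 22 prompts on Pro and 110 on the $200 Max plan;
# large codebases push the per-prompt cost up and the counts down fast.
```

This is why "40 to 80 hours" tells users so little: the same token budget can mean hundreds of small prompts or a handful of large-context ones.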
How Block built a free AI coding agent that works offline

Goose takes a radically different approach to the same problem. Built by Block, the payments company led by Jack Dorsey, Goose is what engineers call an "on-machine AI agent." Unlike Claude Code, which sends your queries to Anthropic's servers for processing, Goose can run entirely on your local computer using open-source language models that you download and control yourself. The project's documentation describes it as going "beyond code suggestions" to "install, execute, edit, and test with any LLM." That last phrase — "any LLM" — is the key differentiator. Goose is model-agnostic by design. You can connect Goose to Anthropic's Claude models if you have API access. You can use OpenAI's GPT-5 or Google's Gemini. You can route it through services like Groq or OpenRouter. Or — and this is where things get interesting — you can run it entirely locally using tools like Ollama, which let you download and execute open-source models on your own hardware.

The practical implications are significant. With a local setup, there are no subscription fees, no usage caps, no rate limits, and no concerns about your code being sent to external servers. Your conversations with the AI never leave your machine. "I use Ollama all the time on planes — it's a lot of fun!" Sareen noted during a demonstration, highlighting how local models free developers from the constraints of internet connectivity.

What Goose can do that traditional code assistants can't

Goose operates as a command-line tool or desktop application that can autonomously perform complex development tasks. It can build entire projects from scratch, write and execute code, debug failures, orchestrate workflows across multiple files, and interact with external APIs — all without constant human oversight.
The architecture relies on what the AI industry calls "tool calling" or "function calling" — the ability for a language model to request specific actions from external systems. When you ask Goose to create a new file, run a test suite, or check the status of a GitHub pull request, it doesn't just generate text describing what should happen. It actually executes those operations.

This capability depends heavily on the underlying language model. Claude 4 models from Anthropic currently perform best at tool calling, according to the Berkeley Function-Calling Leaderboard, which ranks models on their ability to translate natural language requests into executable code and system commands. But newer open-source models are catching up quickly. Goose's documentation highlights several options with strong tool-calling support: Meta's Llama series, Alibaba's Qwen models, Google's Gemma variants, and DeepSeek's reasoning-focused architectures.

The tool also integrates with the Model Context Protocol, or MCP, an emerging standard for connecting AI agents to external services. Through MCP, Goose can access databases, search engines, file systems, and third-party APIs — extending its capabilities far beyond what the base language model provides.

Setting Up Goose with a Local Model

For developers interested in a completely free, privacy-preserving setup, the process involves three main components: Goose itself, Ollama (a tool for running open-source models locally), and a compatible language model.

Step 1: Install Ollama

Ollama is an open-source project that dramatically simplifies the process of running large language models on personal hardware. It handles the complex work of downloading, optimizing, and serving models through a simple interface. Download and install Ollama from ollama.com. Once installed, you can pull models with a single command.
For coding tasks, Qwen 2.5 offers strong tool-calling support:

ollama run qwen2.5

The model downloads automatically and begins running on your machine.

Step 2: Install Goose

Goose is available as both a desktop application and a command-line interface. The desktop version provides a more visual experience, while the CLI appeals to developers who prefer working entirely in the terminal.
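The "tool calling" loop described earlier reduces to a simple pattern: the model emits a structured request naming a tool and its arguments, and the agent executes the matching function. The sketch below illustrates that pattern only; the tool names and JSON shape are invented for illustration and are not Goose's actual API:

```python
import json

# Stub "tools" standing in for real actions an agent could take.
def run_tests(path: str) -> str:
    return f"ran test suite at {path}: 3 passed"

def create_file(path: str, content: str) -> str:
    return f"created {path} ({len(content)} bytes)"

TOOLS = {"run_tests": run_tests, "create_file": create_file}

def dispatch(request: str) -> str:
    """Parse a model-emitted tool call (JSON) and execute the matching tool.
    In a real agent, `request` would come from the LLM's output."""
    call = json.loads(request)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

result = dispatch('{"tool": "create_file", "args": {"path": "app.py", "content": "print(1)"}}')
print(result)  # created app.py (8 bytes)
```

The key point is that the model never executes anything itself; it only proposes actions, and the agent's dispatcher decides what actually runs, which is also where sandboxing and permission checks live.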

Source: VentureBeat AI

Nous Research, the open-source artificial intelligence startup backed by crypto venture firm Paradigm, released a new competitive programming model on Monday that it says matches or exceeds several larger proprietary systems — trained in just four days using 48 of Nvidia's latest B200 graphics processors. The model, called NousCoder-14B, is another entry in a crowded field of AI coding assistants, but arrives at a particularly charged moment: Claude Code, the agentic programming tool from rival Anthropic, has dominated social media discussion since New Year's Day, with developers posting breathless testimonials about its capabilities. The simultaneous developments underscore how quickly AI-assisted software development is evolving — and how fiercely companies large and small are competing to capture what many believe will become a foundational technology for how software gets written.

NousCoder-14B achieves a 67.87 percent accuracy rate on LiveCodeBench v6, a standardized evaluation that tests models on competitive programming problems published between August 2024 and May 2025. That figure represents a 7.08 percentage point improvement over the base model it was trained from, Alibaba's Qwen3-14B, according to Nous Research's technical report published alongside the release.

"I gave Claude Code a description of the problem, it generated what we built last year in an hour," wrote Jaana Dogan, a principal engineer at Google responsible for the Gemini API, in a viral post on X last week that captured the prevailing mood around AI coding tools. Dogan was describing a distributed agent orchestration system her team had spent a year developing — a system Claude Code approximated from a three-paragraph prompt.
The juxtaposition is instructive: while Anthropic's Claude Code has captured imaginations with demonstrations of end-to-end software development, Nous Research is betting that open-source alternatives trained on verifiable problems can close the gap — and that transparency in how these models are built matters as much as raw capability.

How Nous Research built an AI coding model that anyone can replicate

What distinguishes the NousCoder-14B release from many competitor announcements is its radical openness. Nous Research published not just the model weights but the complete reinforcement learning environment, benchmark suite, and training harness — built on the company's Atropos framework — enabling any researcher with sufficient compute to reproduce or extend the work. "Open-sourcing the Atropos stack provides the necessary infrastructure for reproducible olympiad-level reasoning research," noted one observer on X, summarizing the significance for the academic and open-source communities.

The model was trained by Joe Li, a researcher in residence at Nous Research and a former competitive programmer himself. Li's technical report reveals an unexpectedly personal dimension: he compared the model's improvement trajectory to his own journey on Codeforces, the competitive programming platform where participants earn ratings based on contest performance. Based on rough estimates mapping LiveCodeBench scores to Codeforces ratings, Li calculated that NousCoder-14B's improvement — from approximately the 1600-1750 rating range to 2100-2200 — mirrors a leap that took him nearly two years of sustained practice between ages 14 and 16. The model accomplished the equivalent in four days. "Watching that final training run unfold was quite a surreal experience," Li wrote in the technical report. But Li was quick to note an important caveat that speaks to broader questions about AI efficiency: he solved roughly 1,000 problems during those two years, while the model required 24,000.
Humans, at least for now, remain dramatically more sample-efficient learners. Inside the reinforcement learning system that trains on 24,000 competitive programming problems NousCoder-14B's training process offers a window into the increasingly sophisticated techniques researchers use to improve AI reasoning capabilities through reinforcement learning. The approach relies on what researchers call "verifiable rewards" — a system where the model generates code solutions, those solutions are executed against test cases, and the model receives a simple binary signal: correct or incorrect. This feedback loop, while conceptually straightforward, requires significant infrastructure to execute at scale. Nous Research used Modal, a cloud computing platform, to run sandboxed code execution in parallel. Each of the 24,000 training problems contains hundreds of test cases on average, and the system must verify that generated code produces correct outputs within time and memory constraints — 15 seconds and 4 gigabytes, respectively. The training employed a technique called DAPO (Dynamic Sampling Policy Optimization), which the researchers found performed slightly better than alternatives in their experiments. A key innovation involves "dynamic sampling" — discarding training examples where the model either solves all attempts or fails all attempts, since these provide no useful gradient signal for learning. The researchers also adopted "iterative context extension," first training the model with a 32,000-token context window before expanding to 40,000 tokens. During evaluation, extending the context further to approximately 80,000 tokens produced the best results, with accuracy reaching 67.87 percent. Perhaps most significantly, the training pipeline overlaps inference and verification — as soon as the model generates a solution, it begins work on the next problem while the previous solution is being checked. 
This pipelining, combined with asynchronous training where multiple model instances work in parallel, maximizes hardware utilization on expensive GPU clusters. The looming data shortage that could slow AI coding model progress Buried in Li's technical report is a finding with significant implications for the future of AI development: the training dataset for NousCoder-14B encompasses "a significant portion of all readily available, verifiable competitive programming problems in a standardized dataset format." In other words, for this particular domain, the researchers are approaching the limits of high-quality training data. "The total number of competitive programming problems on the Internet is roughly the same order of magnitude," Li wrote, referring to the 24,000 problems used for training. "This suggests that within the competitive programming domain, we have approached the limits of high-quality data." This observation echoes growing concern across the AI industry about data constraints. While compute continues to scale according to well-understood economic and engineering principles, training data is "increasingly finite," as Li put it. "It appears that some of the most important research that needs to be done in the future will be in the areas of synthetic data generation and data efficient algorithms and architectures," he concluded. The challenge is particularly acute for competitive programming because the domain requires problems with known correct solutions that can be verified automatically. Unlike natural language tasks where human evaluation or proxy metrics suffice, code either works or it doesn't — making synthetic data generation considerably more difficult. Li identified one potential avenue: training models not just to solve problems but to generate solvable problems, enabling a form of self-play similar to techniques that proved successful in game-playing AI systems. 
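The two core training ideas just described, binary "verifiable rewards" and DAPO-style dynamic sampling, can be sketched in a few lines. This is a simplified illustration of the concepts, not the Atropos implementation:

```python
# Verifiable reward: execute generated code against test cases and emit a
# binary signal. Here the "execution" is abstracted to comparing outputs.
def verifiable_reward(outputs: list, expected: list) -> int:
    """1 if the solution matches every test case, else 0."""
    return int(outputs == expected)

# Dynamic sampling (as in DAPO): a problem where every rollout passes, or
# every rollout fails, gives a zero-variance reward and thus no useful
# gradient signal -- discard it and keep only mixed-outcome problems.
def dynamic_sample(reward_batch: list) -> list:
    """Keep only problems whose rollouts have both passes and failures."""
    return [r for r in reward_batch if 0 < sum(r) < len(r)]

rewards_per_problem = [
    [1, 1, 1, 1],  # solved on every attempt -> discarded
    [0, 0, 0, 0],  # failed on every attempt -> discarded
    [1, 0, 1, 0],  # mixed -> kept, carries learning signal
]
print(len(dynamic_sample(rewards_per_problem)))  # 1
```

In the real pipeline the reward step also enforces the 15-second / 4-gigabyte execution limits mentioned above, and runs in sandboxes on Modal while the model is already generating the next solution.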
"Once synthetic problem generation is solved, self-play becomes a very interesting direction," he wrote. A $65 million bet that open-source AI can compete with Big Tech

Source: VentureBeat AI

Salesforce on Tuesday launched an entirely rebuilt version of Slackbot, the company's workplace assistant, transforming it from a simple notification tool into what executives describe as a fully powered AI agent capable of searching enterprise data, drafting documents, and taking action on behalf of employees. The new Slackbot, now generally available to Business+ and Enterprise+ customers, is Salesforce's most aggressive move yet to position Slack at the center of the emerging "agentic AI" movement — where software agents work alongside humans to complete complex tasks. The launch comes as Salesforce attempts to convince investors that artificial intelligence will bolster its products rather than render them obsolete. "Slackbot isn't just another copilot or AI assistant," said Parker Harris, Salesforce co-founder and Slack's chief technology officer, in an exclusive interview with VentureBeat. "It's the front door to the agentic enterprise, powered by Salesforce."

From tricycle to Porsche: Salesforce rebuilt Slackbot from the ground up

Harris was blunt about what distinguishes the new Slackbot from its predecessor: "The old Slackbot was, you know, a little tricycle, and the new Slackbot is like, you know, a Porsche." The original Slackbot, which has existed since Slack's early days, performed basic algorithmic tasks — reminding users to add colleagues to documents, suggesting channel archives, and delivering simple notifications. The new version runs on an entirely different architecture built around a large language model and sophisticated search capabilities that can access Salesforce records, Google Drive files, calendar data, and years of Slack conversations. "It's two different things," Harris explained. "The old Slackbot was algorithmic and fairly simple. The new Slackbot is brand new — it's based around an LLM and a very robust search engine, and connections to third-party search engines, third-party enterprise data."
Salesforce chose to retain the Slackbot brand despite the fundamental technical overhaul. "People know what Slackbot is, and so we wanted to carry that forward," Harris said. Why Anthropic's Claude powers the new Slackbot — and which AI models could come next The new Slackbot runs on Claude, Anthropic's large language model, a choice driven partly by compliance requirements. Slack's commercial service operates under FedRAMP Moderate certification to serve U.S. federal government customers, and Harris said Anthropic was "the only provider that could give us a compliant LLM" when Slack began building the new system. But that exclusivity won't last. "We are, this year, going to support additional providers," Harris said. "We have a great relationship with Google. Gemini is incredible — performance is great, cost is great. So we're going to use Gemini for some things." He added that OpenAI remains a possibility as well. Harris echoed Salesforce CEO Marc Benioff's view that large language models are becoming commoditized: "You've heard Marc talk about LLMs are commodities, that they're democratized. I call them CPUs." On the sensitive question of training data, Harris was unequivocal: Salesforce does not train any models on customer data. "Models don't have any sort of security," he explained. "If we trained it on some confidential conversation that you and I have, I don't want Carolyn to know — if I train it into the LLM, there is no way for me to say you get to see the answer, but Carolyn doesn't." Inside Salesforce's internal experiment: 80,000 employees tested Slackbot with striking results Salesforce has been testing the new Slackbot internally for months, rolling it out to all 80,000 employees. According to Ryan Gavin, Slack's chief marketing officer, the results have been striking: "It's the fastest adopted product in Salesforce history." 
Internal data shows that two-thirds of Salesforce employees have tried the new Slackbot, with 80% of those users continuing to use it regularly. Internal satisfaction rates reached 96% — the highest for any AI feature Slack has shipped. Employees report saving between two and 20 hours per week. The adoption happened largely organically. "I think it was about five days, and a Canvas was developed by our employees called 'The Most Stealable Slackbot Prompts,'" Gavin said. "People just started adding to it organically. I think it's up to 250-plus prompts that are in this Canvas right now." Kate Crotty, a principal UX researcher at Salesforce, found that 73% of internal adoption was driven by social sharing rather than top-down mandates. "Everybody is there to help each other learn and communicate hacks," she said. How Slackbot transforms scattered enterprise data into executive-ready insights During a product demonstration, Amy Bauer, Slack's product experience designer, showed how Slackbot can synthesize information across multiple sources. In one example, she asked Slackbot to analyze customer feedback from a pilot program, upload an image of a usage dashboard, and have Slackbot correlate the qualitative and quantitative data. "This is where Slackbot really earns its keep for me," Bauer explained. "What it's doing is not just simply reading the image — it's actually looking at the image and comparing it to the insight it just generated for me." Slackbot can then query Salesforce to find enterprise accounts with open deals that might be good candidates for early access, creating what Bauer called "a really great justification and plan to move forward." Finally, it can synthesize all that information into a Canvas — Slack's collaborative document format — and find calendar availability among stakeholders to schedule a review meeting. "Up until this point, we have been working in a one-to-one capacity with Slackbot," Bauer said. 
"But one of the benefits that I can do now is take this insight and have it generate this into a Canvas, a shared workspace where I can iterate on it, refine it with Slackbot, or share it out with my team." Rob Seaman, Slack's chief product officer, said the Canvas creation demonstrates where the product is heading: "This is making a tool call internally to Slack Canvas to actually write, effectively, a shared document. But it signals where we're going with Slackbot — we're eventually going to be adding in additional third-party tool calls." MrBeast's company became a Slackbot guinea pig—and employees say they're saving 90 minutes a day Among Salesforce's pilot customers is Beast Industries, the parent company of YouTube star MrBeast. Luis Madrigal, the company's chief information officer, joined the launch announcement to describe his experience. "As somebody who has rolled out enterprise technologies for over two decades now, this was practically one of the easiest," Madrigal said. "The plumbing is there. Slack as an implementation, Enterprise Tools — being able to turn on the Slackbot and the Slack AI functionality was as simple as having my team go in, review, do a quick security review." Madrigal said his security team signed off "rather quickly" — unusual for enterprise AI deployments — because Slackbot accesses only the information each individual user already has permission to view. "Given all the guardrails you guys have put into place for Slackbot to be unique and customized to only the information that each individual user has, only the conversations and the Slack rooms and Slack channels that they're part of—that made my security team sign off rather quickly." One Beast Industries employee, Sinan, the head of Beast Games marketing, reported saving "at bare minimum, 90 minutes a day." Another employee, Spencer, a creative supervisor, described it as "an assistant who's paying attention when I'm not." 
Other pilot customers include Slalom, reMarkable, Xero, Mercari, and Engine.
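The permission model Madrigal credits for the fast security sign-off, the assistant can only retrieve content the requesting user can already see, is a standard pattern worth making concrete. The sketch below is an invented illustration of that idea; the data structures are hypothetical and not Slack's actual API:

```python
# Hypothetical message store and channel membership, for illustration only.
MESSAGES = [
    {"channel": "#exec-comp", "text": "salary bands draft"},
    {"channel": "#launch",    "text": "pilot feedback summary"},
]
MEMBERSHIP = {
    "carolyn": {"#launch"},
    "parker":  {"#launch", "#exec-comp"},
}

def search(user: str, query: str) -> list:
    """Return matching messages, filtered to channels the user belongs to.
    The permission check happens BEFORE anything reaches the model, so the
    LLM never sees content the requester couldn't open themselves."""
    allowed = MEMBERSHIP.get(user, set())
    return [m["text"] for m in MESSAGES
            if m["channel"] in allowed and query in m["text"]]

print(search("carolyn", "salary"))  # [] -- no access to #exec-comp
print(search("parker", "salary"))   # ['salary bands draft']
```

Filtering at retrieval time rather than training time is also why Harris rules out training on customer data: as he notes above, once confidential text is baked into model weights there is no per-user way to withhold it.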

Source: VentureBeat AI Railway, a San Francisco-based cloud platform that has quietly amassed two million developers without spending a dollar on marketing, announced Thursday that it raised $100 million in a Series B funding round, as surging demand for artificial intelligence applications exposes the limitations of legacy cloud infrastructure. TQ Ventures led the round, with participation from FPV Ventures, Redpoint, and Unusual Ventures. The investment values Railway as one of the most significant infrastructure startups to emerge during the AI boom, capitalizing on developer frustration with the complexity and cost of traditional platforms like Amazon Web Services and Google Cloud. "As AI models get better at writing code, more and more people are asking the age-old question: where, and how, do I run my applications?" said Jake Cooper, Railway's 28-year-old founder and chief executive, in an exclusive interview with VentureBeat. "The last generation of cloud primitives were slow and outdated, and now with AI moving everything faster, teams simply can't keep up." The funding is a dramatic acceleration for a company that has charted an unconventional path through the cloud computing industry. Railway raised just $24 million in total before this round, including a $20 million Series A from Redpoint in 2022. The company now processes more than 10 million deployments monthly and handles over one trillion requests through its edge network — metrics that rival far larger and better-funded competitors. Why three-minute deploy times have become unacceptable in the age of AI coding assistants Railway's pitch rests on a simple observation: the tools developers use to deploy and manage software were designed for a slower era. A standard build-and-deploy cycle using Terraform, the industry-standard infrastructure tool, takes two to three minutes. 
That delay, once tolerable, has become a critical bottleneck as AI coding assistants like Claude, ChatGPT, and Cursor can generate working code in seconds. "When godly intelligence is on tap and can solve any problem in three seconds, those amalgamations of systems become bottlenecks," Cooper told VentureBeat. "What was really cool for humans to deploy in 10 seconds or less is now table stakes for agents." The company claims its platform delivers deployments in under one second — fast enough to keep pace with AI-generated code. Customers report a tenfold increase in developer velocity and up to 65 percent cost savings compared to traditional cloud providers. These numbers come directly from enterprise clients, not internal benchmarks. Daniel Lobaton, chief technology officer at G2X, a platform serving 100,000 federal contractors, measured deployment speed improvements of seven times faster and an 87 percent cost reduction after migrating to Railway. His infrastructure bill dropped from $15,000 per month to approximately $1,000. "The work that used to take me a week on our previous infrastructure, I can do in Railway in like a day," Lobaton said. "If I want to spin up a new service and test different architectures, it would take so long on our old setup. In Railway I can launch six services in two minutes." Inside the controversial decision to abandon Google Cloud and build data centers from scratch What distinguishes Railway from competitors like Render and Fly.io is the depth of its vertical integration. In 2024, the company made the unusual decision to abandon Google Cloud entirely and build its own data centers, a move that echoes the famous Alan Kay maxim: "People who are really serious about software should make their own hardware." "We wanted to design hardware in a way where we could build a differentiated experience," Cooper said. 
"Having full control over the network, compute, and storage layers lets us do really fast build and deploy loops, the kind that allows us to move at 'agentic speed' while staying 100 percent the smoothest ride in town." The approach paid dividends during recent widespread outages that affected major cloud providers — Railway remained online throughout. This soup-to-nuts control enables pricing that undercuts the hyperscalers by roughly 50 percent and newer cloud startups by three to four times. Railway charges by the second for actual compute usage: $0.00000386 per gigabyte-second of memory, $0.00000772 per vCPU-second, and $0.00000006 per gigabyte-second of storage. There are no charges for idle virtual machines — a stark contrast to the traditional cloud model where customers pay for provisioned capacity whether they use it or not. "The conventional wisdom is that the big guys have economies of scale to offer better pricing," Cooper noted. "But when they're charging for VMs that usually sit idle in the cloud, and we've purpose-built everything to fit much more density on these machines, you have a big opportunity." How 30 employees built a platform generating tens of millions in annual revenue Railway has achieved its scale with a team of just 30 employees generating tens of millions in annual revenue — a ratio of revenue per employee that would be exceptional even for established software companies. The company grew revenue 3.5 times last year and continues to expand at 15 percent month-over-month. Cooper emphasized that the fundraise was strategic rather than necessary. "We're default alive; there's no reason for us to raise money," he said. "We raised because we see a massive opportunity to accelerate, not because we needed to survive." The company hired its first salesperson only last year and employs just two solutions engineers. 
Nearly all of Railway's two million users discovered the platform through word of mouth — developers telling other developers about a tool that actually works. "We basically did the standard engineering thing: if you build it, they will come," Cooper recalled. "And to some degree, they came." From side projects to Fortune 500 deployments: Railway's unlikely corporate expansion Despite its grassroots developer community, Railway has made significant inroads into large organizations. The company claims that 31 percent of Fortune 500 companies now use its platform, though deployments range from company-wide infrastructure to individual team projects. Notable customers include Bilt, the loyalty program company; Intuit's GoCo subsidiary; TripAdvisor's Cruise Critic; and MGM Resorts. Kernel, a Y Combinator-backed startup providing AI infrastructure to over 1,000 companies, runs its entire customer-facing system on Railway for $444 per month. "At my previous company Clever, which sold for $500 million, I had six full-time engineers just managing AWS," said Rafael Garcia, Kernel's chief technology officer. "Now I have six engineers total, and they all focus on product. Railway is exactly the tool I wish I had in 2012." For enterprise customers, Railway offers security certifications including SOC 2 Type 2 compliance and HIPAA readiness, with business associate agreements available upon request. The platform provides single sign-on authentication, comprehensive audit logs, and the option to deploy within a customer's existing cloud environment through a "bring your own cloud" configuration. Enterprise pricing starts at custom levels, with specific add-ons for extended log retention ($200 monthly), HIPAA BAAs ($1,000), enterprise support with SLOs ($2,000), and dedicated virtual machines ($10,000). 
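Railway's quoted per-second rates can be turned into a rough monthly estimate. The rates below are the ones from the article; the workload shape (1 vCPU, 2 GB of memory, 10 GB of storage, running continuously for a 30-day month) is an illustrative assumption:

```python
# Per-second rates quoted by Railway (from the article).
MEM_PER_GB_SEC = 0.00000386      # $/GB-second of memory
VCPU_PER_SEC = 0.00000772        # $/vCPU-second
STORAGE_PER_GB_SEC = 0.00000006  # $/GB-second of storage

def monthly_cost(vcpus: float, mem_gb: float, disk_gb: float,
                 seconds: int = 30 * 24 * 3600) -> float:
    """Dollar cost for a service running continuously for `seconds`."""
    return seconds * (vcpus * VCPU_PER_SEC
                      + mem_gb * MEM_PER_GB_SEC
                      + disk_gb * STORAGE_PER_GB_SEC)

# A hypothetical always-on 1 vCPU / 2 GB / 10 GB service:
print(round(monthly_cost(1, 2, 10), 2))  # 41.58
```

Because billing stops the moment usage does, a service that is active only a fraction of the month costs proportionally less, which is the contrast Cooper draws with provisioned-capacity pricing.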
The startup's bold strategy to take on Amazon, Google, and a new generation of cloud rivals Railway enters a crowded market that includes not only the hyperscale cloud providers—Amazon Web Services, Microsoft Azure, and Google Cloud Platform—but also a growing cohort of developer-focused platforms like Vercel, Render, Fly.io, and Heroku. Cooper argues that Railway's competitors fall into two camps, neither of which has fully committed to the new infrastructure model that AI demands.
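As a sanity check on the per-second rates quoted above, here is a minimal cost sketch. The rates come from the article, not from Railway's billing documentation, and real bills depend on actual usage, so treat this as illustrative only:

```python
# Railway's quoted per-second rates (from the article above; illustrative only).
MEMORY_RATE  = 0.00000386  # $ per GB-second of memory
VCPU_RATE    = 0.00000772  # $ per vCPU-second
STORAGE_RATE = 0.00000006  # $ per GB-second of storage

def estimate_cost(gb_memory, vcpus, gb_storage, active_seconds):
    """Estimate the bill for a service active for `active_seconds` seconds.

    Idle time costs nothing under this model, so only active seconds count.
    """
    return (gb_memory * MEMORY_RATE
            + vcpus * VCPU_RATE
            + gb_storage * STORAGE_RATE) * active_seconds

# A 1 GB / 1 vCPU / 1 GB-storage service active for a full 30-day month:
print(round(estimate_cost(1, 1, 1, 30 * 24 * 3600), 2))  # about $30.17
```

The "no charges for idle virtual machines" claim is what the `active_seconds` parameter captures: a service that sleeps most of the month is billed only for the seconds it actually runs.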

Source: VentureBeat AI Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users — and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself. The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools. "Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers — Anthropic's power-user tier priced between $100 and $200 per month — through the macOS desktop application. For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding. How developers using a coding tool for vacation research inspired Anthropic's latest product The genesis of Cowork lies in Anthropic's recent success with the developer community. In late 2024, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor. According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks. 
"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising — the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model." Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from their developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way." Inside the folder-based architecture that lets Claude read, edit, and create files on your computer Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones. Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents. "In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes." The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. 
Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously — a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker." The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks." The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built — highlighting a recursive feedback loop where AI tools are being used to build better AI tools. During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half. Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last... week and a half?!" This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?" The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion — a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not. 
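The "agentic loop" described above (formulate a plan, execute steps, check the work, ask for clarification at a roadblock) can be sketched in a few lines. All function names here are hypothetical; this is a minimal illustration of the pattern, not Anthropic's implementation:

```python
# Toy agentic loop: plan -> execute -> clarify on roadblocks -> self-check.
def run_agent(task, plan, execute, check, ask_user, max_rounds=3):
    for _ in range(max_rounds):
        steps = plan(task)                      # 1. formulate a plan
        results = []
        for step in steps:
            result = execute(step)              # 2. execute each step
            if result is None:                  # 3. roadblock: ask the user
                result = execute(ask_user(step))
            results.append(result)
        if check(results):                      # 4. verify before finishing
            return results
    raise RuntimeError("task did not pass self-check")
```

A real agent runs steps in parallel and lets users queue multiple tasks, as the article describes; this sketch keeps the control flow sequential for clarity.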
Connectors, browser automation, and skills extend Cowork's reach beyond the local file system Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors — tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions. Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser extension, to execute tasks requiring web access. This combination allows the agent to navigate websites, click buttons, fill forms, and extract information from the internet — all while operating from the desktop application. "Cowork includes a number of novel UX and safety features that we think make the product really special," Cherny explained, highlighting "a built-in VM [virtual machine] for isolation, out of the box support for browser automation, support for all your claude.ai data connectors, asking you for clarification when it's unsure." Anthropic has also introduced an initial set of "skills" specifically designed for Cowork that enhance Claude's ability to create documents, presentations, and other files. These build on the Skills for Claude framework the company announced in October, which provides specialized instruction sets Claude can load for particular types of tasks. Why Anthropic is warning users that its own AI agent could delete their files The transition from a chatbot that suggests edits to an agent that makes edits introduces significant risk. An AI that can organize files can, theoretically, delete them. In a notable display of transparency, Anthropic devoted considerable space in its announcement to warning users about Cowork's potential dangers — an unusual approach for a product launch. 
The company explicitly acknowledges that Claude "can take potentially destructive actions (such as deleting local files) if it's instructed to." Because Claude might occasionally misinterpret instructions, Anthropic urges users to provide "very clear guidance" about sensitive operations.

Source: Hugging Face Blog GOAL: End-to-end Machine Learning experiments Setup and Install Install Codex Install the Hugging Face Skills Connect to Hugging Face Your first AI Experiment Instruct Codex to do an end-to-end fine-tuning experiment Updating the Training Report Dataset Validation Review Before Submitting Track Progress using the Training Report Use Your Model Hardware and Cost What's Next Resources Codex Hugging Face Skills Building on our work to get Claude Code to train open source models, we are now getting Codex to go further. We gave Codex access to the Hugging Face Skills repository, which contains skills for Machine Learning and AI tasks such as training or evaluating models. With HF skills, a coding agent can: Fine-tune and apply RL alignment on language models Review, explain, and act on live training metrics from Trackio Evaluate checkpoints and act on evaluation results Create reports from experiments Export to and quantize models with GGUF for local deployment Publish models to the Hub This tutorial dives even deeper and shows you how it works and how to use it yourself. So let's get started. Codex uses AGENTS.md files to accomplish specialized tasks, whilst Claude Code uses 'Skills'. Fortunately, 'HF-skills' is compatible with both approaches and works with major coding agents like Claude Code, Codex, or Gemini CLI. With HF-s

Source: Hugging Face Blog The "Black Box" Problem of Agent Benchmarks The Experiment: Diagnosing ITBench Agents Finding 1: Stronger models like Gemini-3-Flash show surgical (isolated) failure modes per trace, whereas the open-source Kimi-K2 and GPT-oss-120b show compounding failure patterns Finding 2: "Non-Fatal" vs. "Fatal" Failures The "Non-Fatal" (Benign) Flaws The "Fatal" Flaws Case Study: Gemini-3-Flash (Decisive but Overconfident) Case Study: GPT-OSS-120B A different (and more useful) way to read the plots: “fatal” vs “non-fatal” Recoverable / structural (show up even in successful traces) Fatal / decisive (strongly associated with failed traces) Conclusion Ayhan Sebin Saurabh Jha Rohan Arora Daby Sow Mert Cemri Melissa Pan Ion Stoica ITBench HF Space ITBench HF Dataset MAST HF Dataset ITBench Github MAST Github IBM Research and UC Berkeley collaborated to study how agentic LLM systems break in real-world IT automation, for tasks involving incident triage, logs/metrics queries, and Kubernetes actions in long-horizon tool loops. Benchmarks typically reduce performance to a single number, telling you whether an agent failed but never why. To solve this black-box problem, we applied MAST (Multi-Agent System Failure Taxonomy), an emerging practice for diagnosing agentic reliability. By leveraging MAST to analyze ITBench—the industry benchmark for SRE, Se

Source: VentureBeat AI Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over 100 engineers, but competing against Mark Zuckerberg's $100 million offers seemed impossible. So he spent $5,000 — a fifth of his marketing budget — on a billboard in San Francisco displaying what looked like gibberish: five strings of random numbers. The numbers were actually AI tokens. Decoded, they led to a coding challenge: build an algorithm to act as a digital bouncer at Berghain, the Berlin nightclub famous for rejecting nearly everyone at the door. Within days, thousands attempted the puzzle. 430 cracked it. Some got hired. The winner flew to Berlin, all expenses paid. That unconventional approach has now attracted $69 million in Series B funding, led by Ribbit Capital with participation from Evantic and existing investors Sequoia Capital, Conviction, and Pear VC. The round values Listen Labs at $500 million and brings its total capital to $100 million. In nine months since launch, the company has grown annualized revenue by 15x to eight figures and conducted over one million AI-powered interviews. "When you obsess over customers, everything else follows," Wahlforss said in an interview with VentureBeat. "Teams that use Listen bring the customer into every decision, from marketing to product, and when the customer is delighted, everyone is." Why traditional market research is broken, and what Listen Labs is building to fix it Listen's AI researcher finds participants, conducts in-depth interviews, and delivers actionable insights in hours, not weeks. The platform replaces the traditional choice between quantitative surveys — which provide statistical precision but miss nuance—and qualitative interviews, which deliver depth but cannot scale. Wahlforss explained the limitation of existing approaches: "Essentially surveys give you false precision because people end up answering the same question... You can't get the outliers. 
People are actually not honest on surveys." The alternative, one-on-one human interviews, "gives you a lot of depth. You can ask follow up questions. You can kind of double check if they actually know what they're talking about. And the problem is you can't scale that." The platform works in four steps: users create a study with AI assistance, Listen recruits participants from its global network of 30 million people, an AI moderator conducts in-depth interviews with follow-up questions, and results are packaged into executive-ready reports including key themes, highlight reels, and slide decks. What distinguishes Listen's approach is its use of open-ended video conversations rather than multiple-choice forms. "In a survey, you can kind of guess what you should answer, and you have four options," Wahlforss said. "Oh, they probably want me to buy high income. Let me click on that button versus an open ended response. It just generates much more honesty." The dirty secret of the $140 billion market research industry: rampant fraud Listen finds and qualifies the right participants in its global network of 30 million people. But building that panel required confronting what Wahlforss called "one of the most shocking things that we've learned when we entered this industry"—rampant fraud. "Essentially, there's a financial transaction involved, which means there will be bad players," he explained. "We actually had some of the largest companies, some of them have billions in revenue, send us people who claim to be kind of enterprise buyers to our platform and our system immediately detected, like, fraud, fraud, fraud, fraud, fraud." The company built what it calls a "quality guard" that cross-references LinkedIn profiles with video responses to verify identity, checks consistency across how participants answer questions, and flags suspicious patterns. The result, according to Wahlforss: "People talk three times more. 
They're much more honest when they talk about sensitive topics like politics and mental health." Emeritus, an online education company that uses Listen, reported that approximately 20% of survey responses previously fell into the fraudulent or low-quality category. With Listen, they reduced this to almost zero. "We did not have to replace any responses because of fraud or gibberish information," said Gabrielli Tiburi, Assistant Manager of Customer Insights at Emeritus. How Microsoft, Sweetgreen, and Chubbies are using AI interviews to build better products The speed advantage has proven central to Listen's pitch. Traditional customer research at Microsoft could take four to six weeks to generate insights. "By the time we get to them, either the decision has been made or we lose out on the opportunity to actually influence it," said Romani Patel, Senior Research Manager at Microsoft. With Listen, Microsoft can now get insights in days, and in many cases, within hours. The platform has already powered several high-profile initiatives. Microsoft used Listen Labs to collect global customer stories for its 50th anniversary celebration. "We wanted users to share how Copilot is empowering them to bring their best self forward," Patel said, "and we were able to collect those user video stories within a day." Traditionally, that kind of work would have taken six to eight weeks. Simple Modern, an Oklahoma-based drinkware company, used Listen to test a new product concept. The process took about an hour to write questions, an hour to launch the study, and 2.5 hours to receive feedback from 120 people across the country. "We went from 'Should we even have this product?' to 'How should we launch it?'" said Chris Hoyle, the company's Chief Marketing Officer. Chubbies, the shorts brand, achieved a 24x increase in youth research participation—growing from 5 to 120 participants — by using Listen to overcome the scheduling challenges of traditional focus groups with children. 
"There's school, sports, dinner, and homework," explained Lauren Neville, Director of Insights and Innovation. "I had to find a way to hear from them that fit into their schedules." The company also discovered product issues through AI interviews that might have gone undetected otherwise. Wahlforss described how the AI "through conversations, realized there were like issues with the the kids short line, and decided to, like, interview hundreds of kids. And I understand that there were issues in the liner of the shorts and that they were, like, scratchy, quote, unquote, according to the people interviewed." The redesigned product became "a blockbuster hit." The Jevons paradox explains why cheaper research creates more demand, not less Listen Labs is entering a massive but fragmented market. Wahlforss cited research from Andreessen Horowitz estimating the market research industry at roughly $140 billion annually, populated by legacy players — some with more than a billion dollars in revenue — that he believes are vulnerable to disruption. "There are very much existing budget lines that we are replacing," Wahlforss said. "Why we're replacing them is that one, they're super costly. Two, they're kind of stuck in this old paradigm of choosing between a survey or interview, and they also take months to work with." But the more intriguing dynamic may be that AI-powered research doesn't just replace existing spending — it creates new demand. Wahlforss invoked the Jevons paradox, an economic principle that occurs when technological advancements make a resource more efficient to use, but increased efficiency leads to increased overall consumption rather than decreased consumption. "What I've noticed is that as something gets cheaper, you don't need less of it. You want more of it," Wahlforss explained. "There's infinite demand for customer understanding.

Source: VentureBeat AI When the creator of the world's most advanced coding agent speaks, Silicon Valley doesn't just listen — it takes notes. For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup. "If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment." The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding — a shift from typing syntax to commanding autonomous units. Here is an analysis of the workflow that is reshaping how software gets built, straight from the architect himself. How running five AI agents at once turns coding into a real-time strategy game The most striking revelation from Cherny's disclosure is that he does not code in a linear fashion. In the traditional "inner loop" of development, a programmer writes a function, tests it, and moves to the next. Cherny, however, acts as a fleet commander. "I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." By utilizing iTerm2 system notifications, Cherny effectively manages five simultaneous work streams. 
While one agent runs a test suite, another refactors a legacy module, and a third drafts documentation. He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off sessions between the web and his local machine. This validates the "do more with less" strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield exponential productivity gains. The counterintuitive case for choosing the slowest, smartest model In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic's heaviest, slowest model: Opus 4.5. "I use Opus 4.5 with thinking for everything," Cherny explained. "It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end." For enterprise technology leaders, this is a critical insight. The bottleneck in modern AI development isn't the generation speed of the token; it is the human time spent correcting the AI's mistakes. Cherny's workflow suggests that paying the "compute tax" for a smarter model upfront eliminates the "correction tax" later. One shared file turns every AI mistake into a permanent lesson Cherny also detailed how his team solves the problem of AI amnesia. Standard large language models do not "remember" a company's specific coding style or architectural decisions from one session to the next. To address this, Cherny's team maintains a single file named CLAUDE.md in their git repository. "Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time," he wrote.
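A CLAUDE.md file of the kind Cherny describes is just markdown checked into the repository; the entries below are invented purely to illustrate the pattern:

```markdown
# CLAUDE.md (hypothetical example -- contents invented for illustration)

## Mistakes Claude has made before; do not repeat
- Use `date-fns`, not `moment.js`; this repo standardized on it.
- Run the linter before proposing a commit.
- Database migrations go in `db/migrations/`, never inline in app code.
```

Because the file lives in git, every correction becomes a shared, versioned lesson that carries over to the whole team's future sessions.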

Source: Hugging Face Blog Setup and Install Claude Code Codex Gemini CLI Connect to Hugging Face Your First Training Run Instruct the coding agent to fine tune Review Before Submitting Track Progress Use Your Model Training Methods Supervised Fine-Tuning (SFT) Direct Preference Optimization (DPO) Group Relative Policy Optimization (GRPO) Hardware and Cost Model Size to GPU Mapping Demo vs Production Dataset Validation Monitoring Training Converting to GGUF What's Next Resources We gave Claude the ability to fine-tune language models using a new tool called Hugging Face Skills. Not just write training scripts, but to actually submit jobs to cloud GPUs, monitor progress, and push finished models to the Hugging Face Hub. This tutorial shows you how it works and how to use it yourself. Claude Code can use "skills"—packaged instructions, scripts, and domain knowledge—to accomplish specialized tasks. The hf-llm-trainer skill teaches Claude everything it needs to know about training: which GPU to pick for your model size, how to configure Hub authentication, when to use LoRA versus full fine-tuning, and how to handle the dozens of other decisions that go into a successful training run. With this skill, you can tell Claude things like: Fine-tune Qwen3-0.6B on the dataset open-r1/codeforces-cots And Claude will: Validate your dataset format Select appro

Source: Siecle Digital In the United States, Anthropic has just opened an unprecedented standoff with the Department of Defense, one that shows how artificial intelligence has moved to the heart of geopolitical decision-making. As Le Figaro reports, the Trump administration is asking the company to authorize unrestricted use of Claude, its AI model, by the Pentagon. A request that the company […]

Source: Hugging Face Blog TL;DR Table-of-Contents Datasets: Ready for the Next Wave of Large-Scale Robot Learning What's New in Datasets v3.0? New Feature: Dataset Editing Tools! Simulation Environments: Expanding Your Training Grounds LIBERO Support Meta-World Integration Codebase: Powerful Tools For Everyone The New Pipeline for Data Processing Multi-GPU Training Made Easy Policies: Unleashing Open-World Generalization PI0 and PI0.5 GR00T N1.5 Robots: A New Era of Hardware Integration with the Plugin System Key Benefits Reachy 2 Integration Phone Integration The Hugging Face Robot Learning Course Deep Dive: The Modern Robot Learning Tutorial Final thoughts from the team We're thrilled to announce a series of significant advancements across LeRobot, designed to make open-source robot learning more powerful, scalable, and user-friendly than ever before! From revamped datasets to versatile editing tools, new simulation environments, and a groundbreaking plugin system for hardware, LeRobot is continuously evolving to meet the demands of cutting-edge embodied AI. TL;DR LeRobot v0.4.0 delivers a major upgrade for open-source robotics, introducing scalable Datasets v3.0, powerful new VLA models like PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. The release also adds support for LI

Source: Hugging Face Blog Why this matters How the collaboration works Benefits for the community Join us We’re excited to announce a new collaboration between Hugging Face and VirusTotal, the world’s leading threat-intelligence and malware analysis platform. This collaboration enhances the security of files shared across the Hugging Face Hub, helping protect the machine learning community from malicious or compromised assets. As of today, the HF Hub hosts 2.2 million public model artifacts. As we continue to grow into the world’s largest open platform for machine learning models and datasets, ensuring that shared assets remain safe is essential. Threats can take many forms: Malicious payloads disguised as model files or archives Files that have been compromised before upload Binary assets linked to known malware campaigns Dependencies or serialized objects that execute unsafe code when loaded By collaborating with VirusTotal, we’re adding an extra layer of protection and visibility by enabling files shared through H

Source: Hugging Face Blog Intel and Hugging Face collaborated to demonstrate the real-world value of upgrading to Google’s latest C4 Virtual Machine (VM) running on Intel Xeon 6 processors (codenamed Granite Rapids (GNR)). We specifically wanted to benchmark improvements in the text generation performance of OpenAI's GPT OSS Large Language Model (LLM). The results are in, and they are impressive, demonstrating a 1.7x improvement in Total Cost of Ownership (TCO) over the previous-generation Google C3 VM instances. The Google Cloud C4 VM instance further resulted in: 1.4x to 1.7x TPOT throughput/vCPU/dollar A lower price per hour than C3 VMs Introduction GPT OSS is a common name for an open-source Mixture of Experts (MoE) model released by OpenAI. An MoE model is a deep neural network architecture that uses specialized “expert” sub-networks and a “gating network” to decide which experts to use for a given input. MoE models allow you to scale your model capacity efficiently without linearly scaling compute costs. They also allow for specialization, where different “experts” learn different skills, allowing them to adapt to diverse data distributions. Even with very large parameter counts, only a small subset of experts is activated per token, making CPU inference viable. Intel and Hugging Face collaborated to merge an expert execution optimization (PR #40304)
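The gating mechanism described above can be sketched in a few lines of NumPy. Dimensions and weights here are toy values, purely illustrative of the routing idea:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

# Each "expert" is a toy linear layer; the gate scores experts per token.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

def moe_forward(x):
    logits = x @ gate_w                      # gating network scores
    chosen = np.argsort(logits)[-top_k:]     # route to the top-k experts only
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only the chosen experts run -- the compute saving MoE relies on.
    return sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))

out = moe_forward(rng.normal(size=d))
print(out.shape)  # (8,)
```

Because only `top_k` of the `n_experts` sub-networks execute per token, total parameters can grow without a proportional growth in per-token compute, which is why CPU inference remains viable for large MoE models.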

Source: Hugging Face Blog Accessing SAIR 1. Install essentials 2. Authenticate 3. Load the main table (sair.parquet) 4. (Optional) List available structure archives 5. (Optional) Download and extract structures Questions? This summer, SandboxAQ released the Structurally Augmented IC50 Repository (SAIR), the largest dataset of co-folded 3D protein-ligand structures paired with experimentally measured IC₅₀ labels, directly linking molecular structure to drug potency and overcoming a longstanding scarcity in training data. This dataset is now available on Hugging Face, and for the first time, researchers have open access to more than 5 million AI‑generated, high‑accuracy protein-ligand 3D structures, each paired with validated empirical binding potency data. SAIR is an open-sourced dataset and is publicly available for free under a permissive CC BY 4.0 license, making it immediately actionable for commercial and non-commercial R&D pipelines. More than just a dataset, SAIR is a strategic asset that bridges the long-standing data gap in AI-powered drug design. It empowers pharmaceutical, biotech, and tech‑bio leaders to accelerate R&D, expand target horizons, and supercharge AI models – moving more of the costly, lengthy drug design and optimization from the wet lab to in silico. This means shorter hit‑to‑lead timelines, more efficient lead optimization, fewer
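The access steps listed above translate roughly into the sketch below. The repo id and file layout are assumptions for illustration; only `sair.parquet` is named in the post, so check the dataset card on the Hub for the actual paths. It assumes `pip install huggingface_hub pandas pyarrow` and an authenticated session (`hf auth login` or the `HF_TOKEN` environment variable):

```python
# Sketch of SAIR access (steps 1-4 above); repo id is hypothetical.
REPO_ID = "SandboxAQ/SAIR"  # assumption -- verify on the Hub

def load_sair_table(repo_id=REPO_ID):
    """Step 3: download the main table and load it as a DataFrame."""
    from huggingface_hub import hf_hub_download  # lazy import: network call
    import pandas as pd
    path = hf_hub_download(repo_id, "sair.parquet", repo_type="dataset")
    return pd.read_parquet(path)

def list_structure_archives(repo_id=REPO_ID):
    """Step 4 (optional): list archive files shipped alongside the table."""
    from huggingface_hub import list_repo_files
    files = list_repo_files(repo_id, repo_type="dataset")
    return [f for f in files if f.endswith((".tar", ".tar.gz", ".zip"))]
```

From there, step 5 is a matter of downloading whichever archives the main table points at and extracting them locally.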

Source: Hugging Face Blog Core Stack: Model Choices and Technical Innovations Deep Reasoning with Llama Nemotron Evaluation: Transparency and Robustness in Metrics Benchmark Results: DeepResearch Bench For the Hugging Face Developer Community Takeaways Contributors: David Austin, Raja Biswas, Gilberto Titericz Junior, NVIDIA

Source: Siecle Digital As Europe seeks to establish itself as strategic ground for the giants of artificial intelligence, its capitals are competing on political ambition, talent, and technological infrastructure to attract the most advanced research labs. As Reuters reports, OpenAI has decided to turn its London office into its largest research center […]

Source: Siecle Digital With a new update announced by Google, Gemini takes on a new dimension on Android: the AI becomes a genuine agent capable of acting directly inside applications. It is an evolution that sketches the future of mobile assistance, in which the AI no longer merely answers but executes. This feature marks a further step in the strategy […]

Source: Towards Data Science Utilizing feature stores like Feast and distributed compute frameworks like Ray in production machine learning systems

Source: Towards Data Science A deep dive into the Sharpness-Aware-Minimization (SAM) algorithm and how it improves the generalizability of modern deep learning models

Source: Hugging Face Blog What are agent skills? 1. Get the teacher (Claude Opus 4.5) to build a kernel 2. Make an agent skill from the trace 3. Take your skill to an open-source, smaller, or cheaper model Deep dive tutorial into building kernels with agent skills Setup and Install Skill Generation Generate the Skill Evaluate on a Different Model How the evaluation in upskill works What's Next Resources The best thing about agent skills is upskilling your agents on hard problems. There are two ways to look at that: You can take Opus 4.5 or other SOTA models and tackle the hardest problems out there. You can take models that run on your laptop and upskill them to harder problems. In this blog post, we’ll show you how to take on the latter. This blog post walks through the process of using a new tool, upskill, to generate and evaluate agent skills with large models and use them with smaller models. We will benchmark upskill on the task of writing CUDA kernels for diffusers models, but the process is generally useful for cutting costs, or using smaller models on hard and domain-specific problems. What are agent skills? In case you missed it, agent skills are taking the coding agent game by storm. In fact, they’re a straightforward concept: model context defined as files, with instructions as markdown and code as scripts. The file for

Source: Hugging Face Blog Back to Articles The Seeds of China’s Organic Open Source AI Ecosystem DeepSeek R1: A Turning Point From DeepSeek to AI+: Strategic Realignment Global Reception and Response This is the first blog in a series that will examine the Chinese open source community’s advancements over the past year and their reverberations in shaping the entire ecosystem. Much of 2025’s progress can be traced back to January’s “DeepSeek Moment”, when Hangzhou-based AI company DeepSeek released their R1 model. The first blog addresses strategic changes and the explosion of new open models and open source players. The second covers architectural and hardware choices, largely by Chinese companies, made in the wake of a growing open ecosystem, available here. The third analyzes prominent organizations’ trajectories and the future of the global open source ecosystem, available here. For AI researchers and developers contributing to and relying on the open source ecosystem, and for policymakers following the rapidly changing environment, there has never been a better time to build and release open models and artifacts, as proven by the past year’s immense growth catalyzed by DeepSeek. Notably, geopolitics has driven adoption; while models developed in China have dominated across metrics throughout 2025, with new players leapfrogging each other, Western AI communities are see

Source: Hugging Face Blog Back to Articles Discover more in our official blogpost, featuring an interactive experience The journey of building world-class Arabic language models has been one of continuous learning and iteration. Today, we're excited to announce Falcon-H1-Arabic, our most advanced Arabic language model family to date, representing a significant leap forward in both architecture and capabilities. This release embodies months of research, community feedback, and technical innovation, culminating in three powerful models that set new standards for Arabic natural language processing. Building on Success: The Evolution from Falcon-Arabic When we launched Falcon-Arabic a few months ago, the response from the community was both humbling and enlightening. Developers, researchers and students across the Arab world used the model for real use cases, pushing them to its limits and providing invaluable feedback. We learned where the model excelled and, more importantly, where it struggled. Long-context understanding, dialectal variations, mathematical reasoning, and domain-specific knowledge emerged as key areas requiring deeper attention. We didn't just want to make incremental improvements, we wanted to fundamentally rethink our approach. The result is Falcon-H1-Arabic, a model family that addresses every piece of feedback we received while

Source: Hugging Face Blog Back to Articles Looking to show off your robotics aptitude? The AMD Open Robotics Hackathon hosted by AMD, Hugging Face, and Data Monsters is the place to do it. Whether you’re a student, hobbyist, startup founder, or seasoned engineer, this event brings together makers, coders, and roboticists for a fast-paced, hands-on competition that turns bold ideas into functioning demos. The first of two in-person hackathons will take place from December 5-7, 2025 in Tokyo Japan. Our next stop will be in Paris France from December 12-14, 2025. Preparing for the Hackathon: Form a team of up to four roboticists (ages 18+) to take on two missions over the course of 3 days. Mission 1 — An instructor-led exploration and preparation session. Learn how to set up the LeRobot development environment using AMD AI solutions Mission 2 — Build your own creative solution to a real-world problem. Your team has two days to develop an innovative freestyle project using LeRobot technical proficiency: • Strong Linux development skills and experience with Python and related tooling and containerization • Machine learning skills, familiarity with PyTorch, and hands-on experience with model training and inference • Bonus if your team has experience with ROCm, LeRobot, and embedded development. Hardware will be provided to contestants in the form of SO-101 robotics kits, AMD Ryz

Source: Hugging Face Blog Back to Articles Summary The State of Global Compute The Beginning of a Rewiring The Reaction: Powering Chinese AI How China’s Compute Landscape Catalyzed the Cambrian Explosion of Open Models Advances in Compute-Constrained Environments Pushing the Technical Frontier The Aftermath: Hardware, Software and Soft Power From Sufficient to Demanded Domestic Synergy A New Software Landscape Looking Ahead Acknowledgements Appendix: A Timeline of Chip Usage and Controls Summary The status quo of AI chip usage, once almost entirely U.S.-based, is changing. China’s immense progress in open-weight AI development is now being met with rapid domestic AI chip development. In the past few months, inference for highly performant open-weight AI models in China has started to be powered by chips such as Huawei’s Ascend and Cambricon, with some models starting to be trained on domestic chips. There are two large implications, for policymakers and for AI researchers and developers respectively: U.S. export controls correlate with expedited Chinese chip production, and chip scarcity in China likely incentivized many of the innovations that are open-sourced and shaping global AI development. China’s chip development correlates highly with stronger export controls from the U.S. Under uncertainty of chip access, Chinese companies have innovated wit

Source: Hugging Face Blog Back to Articles A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware SO-ARM Starter Workflow; Building an Embodied Surgical Assistant Technical Implementation Sim-to-Real Mixed Training Approach Hardware Requirements Data Collection Implementation Simulation Teleoperation Controls Model Training Pipeline End-to-End Sim Collect–Train–Eval Pipelines Generate Synthetic Data in Simulation Train and Evaluate Policies Convert Models to TensorRT Getting Started Resources A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware Simulation has been a cornerstone in medical imaging to address the data gap. However, in healthcare robotics until now, it's often been too slow, siloed, or difficult to translate into real-world systems. That’s now changing. With new advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments - reducing prototyping time from months to days, improving model accuracy, and enabling safer, faster innovation before a single device reaches the operating room. That's why NVIDIA introduced Isaac for Healthcare earlier this year, a developer framework for AI healthcare robotics, that enables develope

Source: Hugging Face Blog Back to Articles The Story Behind the Library The Foundation Years (2020-2021) The Great Shift: Git to HTTP (2022) An Expanding API Surface (2022–2024) Ready. Xet. Go! (2024-2025) Measuring Growth and Impact Building for the Next Decade Modern HTTP Infrastructure with httpx and hf_xet Agents Made Simple with MCP and Tiny-Agents A Fully-Featured CLI for Modern Workflows Cleaning House for the Future The Migration Guide Acknowledgments TL;DR: After five years of development, huggingface_hub has reached v1.0 - a milestone that marks the library's maturity as the Python package powering 200,000 dependent libraries and providing core functionality for accessing over 2 million public models, 0.5 million public datasets, and 1 million public Spaces. This release introduces breaking changes designed to support the next decade of open machine learning, driven by a global community of almost 300 contributors and millions of users. We highly recommend upgrading to v1.0 to benefit from major performance improvements and new capabilities. pip install --upgrade huggingface_hub Major changes in this release include the migration to httpx as the backend library, a completely redesigned hf CLI (which replaces the deprecated huggingface-cli) featuring a Typer-based interface with a significantly expanded feature set, and full adoption of hf_xet for file t

Source: Hugging Face Blog Back to Articles With the growing capability of large language models (LLMs), a new class of models has emerged: Vision Language Models (VLMs). These models can analyze images and videos to describe scenes, create captions, and answer questions about visual content. While running AI models on your own device can be difficult because these models are often computationally demanding, it also offers significant benefits, including improved privacy, since your data stays on your machine, and enhanced speed and reliability, because you're not dependent on an internet connection or external servers. This is where tools like Optimum Intel and OpenVINO come in, along with a small, efficient model like SmolVLM. In this blog post, we'll walk you through three easy steps to get a VLM running locally, with no expensive hardware or GPUs required (though you can run all the code samples from this blog post on Intel GPUs). Deploy your model with Optimum Small models like SmolVLM are built for low-resource consumption, but they can be further optimized. In this blog post we will see how to optimize your model to lower memory usage and speed up inference, making it more efficient for deployment on devices with limited resources. To follow this tutorial, you need to install optimum and openvino, which you can do with: pip install optimum-intel[openvino] transf

Source: Hugging Face Blog Back to Articles TL;DR Training Data Training Recipe and Novel Components Architecture Three-Phase Training Approach Novel Training Techniques Results Natural Language Understanding (NLU) Retrieval Performance Learning Languages in the Decay Phase Efficiency Improvements Usage Examples Fine-tuning Examples Encoders Model Family and Links TL;DR This blog post introduces mmBERT, a state-of-the-art massively multilingual encoder model trained on 3T+ tokens of text in over 1800 languages. It shows significant performance and speed improvements over previous multilingual models, being the first to improve upon XLM-R, while also developing new strategies for effectively learning low-resource languages. mmBERT builds upon ModernBERT for a blazingly fast architecture, and adds novel components to enable efficient multilingual learning. If you are interested in trying out the models yourself, some example boilerplate is available at the end of this blogpost! Training Data Figure 1: the training data is progressively annealed to include more languages and more uniform sampling throughout training. mmBERT was trained on a carefully curated multilingual dataset totaling over 3T tokens across three distinct training phases. The foundation of our training data consists of three primary open-source and high
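The "progressively more uniform sampling" across 1800+ languages can be illustrated with exponent-smoothed sampling, the standard technique from XLM-R-style multilingual training (a sketch of the general idea, not mmBERT's exact recipe): sample each language with probability proportional to its token count raised to a temperature tau, and anneal tau toward 0 to flatten the distribution.

```python
def sampling_probs(counts, tau):
    """Exponent-smoothed sampling: p_i is proportional to counts_i ** tau.

    tau = 1.0 reproduces the raw corpus proportions;
    tau -> 0 anneals toward a uniform distribution over languages.
    """
    weights = {lang: c ** tau for lang, c in counts.items()}
    total = sum(weights.values())
    return {lang: w / total for lang, w in weights.items()}

# Toy token counts: an English-heavy corpus plus low-resource languages.
counts = {"en": 1_000_000, "sw": 10_000, "yo": 1_000}

early = sampling_probs(counts, tau=0.7)  # early phase: skewed toward en
late = sampling_probs(counts, tau=0.3)   # decay phase: closer to uniform
print({k: round(v, 3) for k, v in early.items()})
print({k: round(v, 3) for k, v in late.items()})
```

Lowering tau visibly shifts probability mass from English toward the low-resource languages, which is exactly the effect the decay phase relies on.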

Source: Hugging Face Blog Back to Articles A slimmed-down training pipeline from Kimina Prover, with core features and full compatibility with verl. We are happy to introduce kimina-prover-rl, an open-source training pipeline for formal theorem proving in Lean 4, based on a structured reasoning-then-generation paradigm inspired by DeepSeek-R1. This training pipeline is a simplified version of the system we used to train Kimina Prover, preserving the key components of the system and offering full compatibility with the open-source Verl framework. It is released as part of a fork of Verl containing the complete training recipe in recipe/kimina-prover-rl, allowing anyone to reproduce our experiments or adapt the setup to their own models and datasets. All the information needed to set up and launch the pipeline can be found in the README of the recipe. As a result of this training pipeline, we are releasing two models: AI-MO/Kimina-Prover-RL-1.7B, a 1.7B-parameter model that achieves 76.63% Pass@32 on the MiniF2F benchmark — setting a new state of the art for open-source models in this size category, and AI-MO/Kimina-Prover-RL-0.6B, a 0.6B-parameter model that achieves 71.30% Pass@32 on the MiniF2F benchmark — also setting a new state of the art for open-source models in its size category. Introduction kimina-prover-rl i
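Pass@32 measures the chance that at least one of 32 sampled proofs is correct. It is commonly computed with the unbiased estimator popularized by the Codex paper (an assumption here; the post does not state which estimator Kimina uses): for n attempts with c successes, Pass@k = 1 - C(n-c, k) / C(n, k).

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased Pass@k: probability that at least one of k samples drawn
    without replacement from n attempts is correct, given c successes."""
    if n - c < k:
        return 1.0  # too few failures to fill all k slots: always a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical problem attempted 64 times with 4 correct proofs:
print(round(pass_at_k(64, 4, 32), 4))
```

Averaging this quantity over all benchmark problems yields the reported Pass@32 score.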

Source: Hugging Face Blog Back to Articles Elevated by machine learning Learn about our NSS Model How we trained the model Get started experimenting with NSS today! Neural Super Sampling (NSS), a next-generation AI-powered upscaling solution from Arm, is now released for graphics and gaming developers to start experimenting with today! Elevated by machine learning NSS is designed for real-time performance on future mobile devices with Arm Neural Technology. However, latency depends on implementation factors such as GPU configuration, resolution, and use case. In our Enchanted Castle demo video below, NSS reduced GPU workload by 50 percent: the model rendered at 540p and upscaled to 1080p in just 4 ms in a sustained-performance setup. Learn about our NSS Model Neural Super Sampling (NSS) is a parameter prediction model for real-time temporal super sampling developed by Arm, optimized for execution on Neural Accelerators (NX) in mobile GPUs. It enables high-resolution rendering at a lower compute cost by reconstructing high-quality output frames from low-resolution temporal inputs. NSS is particularly suited for mobile gaming, XR, and other power-constrained graphics use cases. Get started with our NSS model today. If you want to go deeper, check out the following resources: Technical Blog: How Neural Super Sampl

Source: Siecle Digital While many companies are multiplying experiments around artificial intelligence, scaling them up remains complex, as a recent MIT study shows. Between the succession of aborted projects and the deployments that are truly industrialized, the gap is very real. Faced with this observation, AI vendors are looking for new growth drivers […]

Source: Numerama Tech In the cult film WarGames, a supercomputer threatened to start a nuclear war. In 2026, reality paints an equally worrying picture: placed at the controls of geopolitical simulations, frontier artificial intelligences such as GPT-5.2 and Gemini 3 Flash choose nuclear escalation in 95% of cases.

Source: Towards Data Science A practical guide to identifying, restoring, and transforming elements within your images

Source: Towards Data Science Have you ever wondered what happens when you apply a filter in a DAX expression? Well, Today I will take you on a deep dive into this fascinating topic, with examples to help you learn something new and surprising.

Source: Siecle Digital Technology is no longer the automatic safe haven it had become. After several years of euphoria, driven by the promise of an artificial intelligence capable of transforming every sector, the markets have recently and abruptly changed their tone. AI, yesterday the undisputed engine of growth, suddenly appeared as a potential threat to part of the business models of […]

Source: Towards Data Science Understanding the foundational distortion of digital audio from first principles, with worked examples and visual intuition

Source: Towards Data Science Inside the research that shows algorithmic price-fixing isn't a bug in the code. It's a feature of the math.

Source: Towards Data Science Use Claude Code to quickly build completely personalized applications

Source: Hugging Face Blog Back to Articles This blog post covers how to use Unsloth and Hugging Face Jobs for fast LLM fine-tuning (specifically LiquidAI/LFM2.5-1.2B-Instruct ) through coding agents like Claude Code and Codex. Unsloth provides ~2x faster training and ~60% less VRAM usage compared to standard methods, so training small models can cost just a few dollars. Why a small model? Small language models like LFM2.5-1.2B-Instruct are ideal candidates for fine-tuning. They are cheap to train, fast to iterate on, and increasingly competitive with much larger models on focused tasks. LFM2.5-1.2B-Instruct runs under 1GB of memory and is optimized for on-device deployment, so what you fine-tune can be served on CPUs, phones, and laptops. You will need We are giving away free credits to fine-tune models on Hugging Face Jobs. Join the Unsloth Jobs Explorers organization to claim your free credits and one-month Pro subscription. A Hugging Face account (required for HF Jobs) Billing setup (for verification, you can monitor your usage and manage your billing in your billing page). A Hugging Face token with write permissions (optional) A coding agent (Open Code, Claude Code, or Codex) Run the Job If you want to train a model using HF Jobs and Unsloth, you can simply use the hf jobs CLI to submit a job. First, you need to install the hf CLI.

Source: Hugging Face Blog Back to Articles tl;dr: We built an agent skill that teaches coding agents how to write production CUDA kernels. Then we pointed Claude and Codex at two real targets: a diffusers pipeline and a transformers model. The agents produced working kernels for both, with correct PyTorch bindings and benchmarks, end to end. Writing CUDA kernels is hard. Writing CUDA kernels that correctly integrate with transformers and diffusers is harder. There are architecture-specific memory access patterns, vectorization strategies, warp shuffle reductions, and a dozen integration pitfalls that trip up even experienced developers. It is exactly the kind of specialized, high-stakes problem where agent skills shine. We gave coding agents the domain knowledge they need, like which GPU architecture to target, how to structure a kernel-builder project, when to use shared memory versus registers, and how to write PyTorch bindings. The agents did the rest. If you have used the LLM training skill or read We Got Claude to Teach Open Models, the pattern will feel familiar: package domain expertise into a skill, point the agent at a problem, and let it work. Why a skill for kernels? The Kernel Hub solved the distribution of custom hardware kernels. You can load pre-compiled kernels from the Hub with a single get_kernel call. No builds, no flags. However, someone still

Source: Hugging Face Blog Back to Articles What Is OpenEnv? The Calendar Gym: A Production-Grade Benchmark What We Learned Looking Ahead Appendix: Common error cases in tool use Specific error cases found in the wild AI agents often perform impressively in controlled research settings, yet struggle when deployed in real-world systems where they must reason across multiple steps, interact with real tools and APIs, operate under partial information, and recover from errors in stateful, permissioned environments—highlighting a persistent gap between research success and production reliability. OpenEnv is an open-source framework from Meta and Hugging Face designed to address this challenge by standardizing how agents interact with real environments. As part of this collaboration, Turing contributed a production-grade calendar management environment to study tool-using agents under realistic constraints such as access control, temporal reasoning, and multi-agent coordination. In this post, we explore how OpenEnv works in practice, why calendars serve as a powerful benchmark for real-world agent evaluation, and what our findings reveal about the current limitations of tool-using agents. What Is OpenEnv? OpenEnv is a framework for evaluating AI agents against real systems rather than simulations. It provides a standardized way to connect agents to real tools and
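The core contract OpenEnv standardizes, an agent interacting with a stateful, permissioned environment, can be sketched with a toy Gym-style reset/step loop. Everything below is illustrative: the method signatures and the miniature calendar task are my assumptions, not OpenEnv's actual API.

```python
class ToyCalendarEnv:
    """Minimal stateful environment: book a free slot; double-booking fails.

    reset() -> observation; step(action) -> (observation, reward, done).
    """
    def __init__(self, slots=("9am", "10am", "11am")):
        self.slots = list(slots)
        self.booked = set()

    def reset(self):
        self.booked.clear()
        return {"free": list(self.slots)}

    def step(self, action):
        slot = action.get("book")
        if slot in self.booked or slot not in self.slots:
            # Validity/permission error: the agent must recover from this.
            return {"error": f"cannot book {slot}"}, -1.0, False
        self.booked.add(slot)
        free = [s for s in self.slots if s not in self.booked]
        return {"free": free}, 1.0, len(free) == 0

env = ToyCalendarEnv()
obs = env.reset()
obs, reward, done = env.step({"book": "9am"})
obs, reward, done = env.step({"book": "9am"})  # double booking fails
print(obs, reward, done)
```

Even this toy exposes the failure modes the post studies: stale state, invalid actions, and the need for error recovery across multiple steps.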

Source: Hugging Face Blog Back to Articles Step 1: Configure the data source Step 2: Build the flow visually Step 3: Review and run See it in action! Running Existing Workflows Run the Glaive Code Assistant workflow Get started SyGra 2.0.0 introduces Studio, an interactive environment that turns synthetic data generation into a transparent, visual craft. Instead of juggling YAML files and terminals, you compose flows directly on the canvas, preview datasets before committing, tune prompts with inline variable hints, and watch executions stream live—all from a single pane. Under the hood it’s the same platform, so everything you do visually generates the corresponding SyGra compatible graph config and task executor scripts. What Studio lets you do Configure and validate models with guided forms (OpenAI, Azure OpenAI, Ollama, Vertex, Bedrock, vLLM, custom endpoints). Connect Hugging Face, file-system, or ServiceNow data sources and preview rows before execution. Configure nodes by selecting models, writing prompts (with auto-suggested variables), and defining outputs or structured schemas. Design downstream outputs using shared state variables and Pydantic-powered mappings.

Source: Hugging Face Blog Back to Articles Evaluation is broken What We're Shipping Why This Matters Get Started TL;DR: Benchmark datasets on Hugging Face can now host leaderboards. Models store their own eval scores. Everything links together. The community can submit results via PR. Verified badges prove that the results can be reproduced. Evaluation is broken Let's be real about where we are with evals in 2026. MMLU is saturated above 91%. GSM8K hit 94%+. HumanEval is conquered. Yet some models that ace benchmarks still can't reliably browse the web, write production code, or handle multi-step tasks without hallucinating, based on usage reports. There is a clear gap between benchmark scores and real-world performance. Furthermore, there is another gap within reported benchmark scores. Multiple sources report different results. From Model Cards, to papers, to evaluation platforms, there is no alignment in reported scores. The result is that the community lacks a single source of truth. What We're Shipping Decentralized and transparent evaluation reporting. We are going to take evaluations on the Hugging Face Hub in a new direction by decentralizing reporting and allowing the entire community to openly report scores for benchmarks. At first, we will start with a shortlist of 4 benchmarks and over time we’ll expand to the most relev

Source: Hugging Face Blog The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

Source: Hugging Face Blog Back to Articles It has become increasingly challenging to assess whether a model’s reported improvements reflect genuine advances or variations in evaluation conditions, dataset composition, or training data that mirrors benchmark tasks. The NVIDIA Nemotron approach to openness addresses this by publishing transparent and reproducible evaluation recipes that make results independently verifiable. NVIDIA released Nemotron 3 Nano 30B A3B with an explicitly open evaluation approach to make that distinction clear. Alongside the model card, we are publishing the complete evaluation recipe used to generate the results, built with the NVIDIA NeMo Evaluator library, so anyone can rerun the evaluation pipeline, inspect the artifacts, and analyze the outcomes independently. We believe that open innovation is the foundation of AI progress. This level of transparency matters because most model evaluations omit critical details. Configs, prompts, harness versions, runtime settings, and logs are often missing or underspecified, and even small differences in these parameters can materially change results. Without a complete recipe, it’s nearly impossible to tell whether a model is genuinely more intelligent or simply optimized for a benchmark. This blog shows developers exactly how to reproduce the evaluation behind Nemotron 3 Nano 30B A3B usin

Source: Hugging Face Blog Back to Articles Introduction Introduction What is CUGA? Open Source and Open Models Integration with Langflow: Visual Agent Design Made Simple Try the Hugging Face Demo: A Hands-On Preview Conclusion and Call to Action AI agents are rapidly becoming essential for building intelligent applications, but creating robust, adaptable agents that scale across domains remains a challenge. Many existing frameworks struggle with brittleness, tool misuse, and failures when faced with complex workflows. CUGA (Configurable Generalist Agent) was designed to overcome these limitations. It's an open-source, AI Agent that combines flexibility, reliability, and ease of use for enterprise use cases. By abstracting orchestration complexity, CUGA empowers developers to focus on domain requirements rather than the internals of agent building. And now, with its integration into Hugging Face Spaces, experimenting with CUGA and open models has never been easier. What is CUGA? CUGA is a configurable, general-purpose AI agent that supports complex, multi-step tasks across web and API environments. It has achieved state-of-the-art performance on leading benchmarks: #1 on AppWorld - a benchmark with 750 real-world tasks across 457 APIs Top-tier on WebArena (#1 from 02/25 - 09/25) - showcases CUGA Computer Use capabilities with a compl

Source: Hugging Face Blog Back to Articles The Solution Why Foundation Models as the Base API Package Traits: Include Only What You Need Image Support (and API Design Trade-offs) Try It Out: chat-ui-swift What's Next Get Involved Links LLMs have become essential tools for building software. But for Apple developers, integrating them remains unnecessarily painful. Developers building AI-powered apps typically take a hybrid approach, adopting some combination of: Local models using Core ML or MLX for privacy and offline capability Cloud providers like OpenAI or Anthropic for frontier capabilities Apple's Foundation Models as a system-level fallback Each comes with different APIs, different requirements, different integration patterns. It's a lot, and it adds up quickly. When I interviewed developers about building AI-powered apps, friction with model integration came up immediately. One developer put it bluntly: I thought I'd quickly use the demo for a test and maybe a quick and dirty build but instead wasted so much time. Drove me nuts. The cost to experiment is high, which discourages developers from discovering that local, open-source models might actually work great for their use case. Today we're announcing AnyLanguageModel, a Swift package that provides a drop-in replacement for Apple's Foundation Models framework with support for multiple model providers. Our goal is to reduc

Source: Hugging Face Blog Back to Articles TLDR Streaming: The Same Easy API The Challenge: Streaming at Scale Under the Hood: What We Improved How are we faster than plain S3: Xet Need a custom streaming pipeline? Push streaming to the limit Get Started and See the Difference TLDR We boosted load_dataset('dataset', streaming=True), streaming datasets without downloading them, with one line of code! Start training on multi-TB datasets immediately, with no complex setup, no downloading, no "disk out of space", and no 429 “stop requesting!” errors. It's super fast! Outrunning our local SSDs when training on 64xH100 with 256 workers downloading data. We've improved streaming to make 100x fewer requests, resolve data 10x faster, deliver 2x samples/sec, and sustain 0 worker crashes at 256 concurrent workers. Loading data, especially at the terabyte scale, is a major pain in any machine learning workflow. We suffered this while training SmolLM3: at one point we had to wait 3 hours before each run to download enough data. Streaming has always been possible in the datasets library, but large-scale training with massive datasets remained a challenge. That changes today. We spent a few months improving the backend, focusing on streaming datasets to make it faster and more efficient. What did we do exactly? Streaming: The Same Easy API First things first: our
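The idea behind never blocking training on the network can be sketched with the standard library: a background thread keeps a bounded buffer of shards in flight while the consumer iterates. This is a generic prefetching pattern, not the datasets library's internals; all names here are mine.

```python
import threading
import queue
import time

def stream_shards(shard_ids, fetch, prefetch=4):
    """Yield fetched shards while a background thread keeps up to
    `prefetch` shards buffered, so consumers rarely wait on I/O."""
    q = queue.Queue(maxsize=prefetch)
    SENTINEL = object()

    def producer():
        for sid in shard_ids:
            q.put(fetch(sid))  # blocks when the buffer is full
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not SENTINEL:
        yield item

# Simulated remote fetch with a little latency.
def fetch(shard_id):
    time.sleep(0.01)
    return [f"sample-{shard_id}-{i}" for i in range(3)]

samples = [s for shard in stream_shards(range(5), fetch) for s in shard]
print(len(samples))
```

With many such workers pulling shards concurrently, throughput is bounded by aggregate network bandwidth rather than per-request latency, which is the regime the post's 256-worker numbers describe.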

Source: Hugging Face Blog Back to Articles Project History Acknowledgements Getting Started Today, we are announcing that Sentence Transformers is transitioning from Iryna Gurevych’s Ubiquitous Knowledge Processing (UKP) Lab at the TU Darmstadt to Hugging Face. Hugging Face's Tom Aarsen has already been maintaining the library since late 2023 and will continue to lead the project. At its new home, Sentence Transformers will benefit from Hugging Face's robust infrastructure, including continuous integration and testing, ensuring that it stays up-to-date with the latest advancements in Information Retrieval and Natural Language Processing. Sentence Transformers (a.k.a. SentenceBERT or SBERT) is a popular open-source library for generating high-quality embeddings that capture semantic meaning. Since its inception by Nils Reimers in 2019, Sentence Transformers has been widely adopted by researchers and practitioners for various natural language processing (NLP) tasks, including semantic search, semantic textual similarity, clustering, and paraphrase mining. After years of development and training by and for the community, over 16,000 Sentence Transformers models are publicly available on the Hugging Face Hub, serving more than a million monthly unique users. "Sentence Transformers has been a huge success story and a culmination of our long-standing research on computing semantic similarities

Source: Hugging Face Blog Back to Articles Open Data for India's AI Future What’s in the Dataset? How We Built It Data Generation Pipeline Embedded Cultural Context Private By Design Who This Is For Practical AI Applications Why It Matters Start Building with Nemotron-Personas-India A compound AI approach to Indian personas grounded in real-world distributions Open Data for India's AI Future India represents one of the world's largest AI opportunities — with over 700 million internet users, a multitude of languages, and a rapidly growing developer ecosystem. Yet, most open datasets reflect Western norms and English-only contexts, creating a data gap that limits AI adoption in India's multilingual, multi-script environment. Today, we're releasing Nemotron-Personas-India, the first open synthetic dataset of Indic personas aligned to India's real-world demographic, geographic, and cultural distributions. Licensed under CC BY 4.0, this dataset offers a privacy-preserving, regulation-ready foundation for scaling AI systems that reflect Indian society—without relying on sensitive personal data. Built with NeMo Data Designer, NVIDIA's enterprise-grade synthetic data generation microservice, Nemotron-Personas-India extends our global collection of Sovereign AI datasets. It builds on the success of our US and Japan persona datasets and includes

Source: Hugging Face Blog Back to Articles The past year has been all about giving LLMs more tools and autonomy to solve more complex and open-ended tasks. The goal of the Jupyter Agent is to give the model the ultimate tool: code execution. A natural way to display multi-step code execution together with reasoning is within a Jupyter Notebook, which consists of code and markdown cells. So we built Jupyter Agent to act as an agent that can execute code directly inside a Jupyter notebook and use this environment to solve data analysis and data science tasks. Think of it like Cursor, but living natively inside your data science workflow. We built a demo of this vision with Qwen-3 Coder, currently one of the strongest coding models. This is a follow-up to our earlier work on jupyter-agent (v1). While large models are starting to show useful behavior, the key question is how we can continue improving them. To this end, we focus on strengthening smaller models to perform well on agentic data science tasks, as they currently struggle to compete with the large models. The goal of this project is to build a pipeline to first generate high-quality training data, then fine-tune an existing small model, and finally evaluate whether the model's performance improves on relevant benchmarks. Let’s begin with the last step: selecting a strong benchmark for evaluating models on data science tasks.
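The mechanism at the heart of a notebook agent, executing code cells sequentially in one shared namespace while capturing their output, can be sketched in a few lines. This is a hedged illustration of the kernel concept, not Jupyter Agent's actual sandboxed executor.

```python
import io
from contextlib import redirect_stdout

def run_cells(cells):
    """Execute code cells in a shared namespace, like a notebook kernel,
    returning the captured stdout of each cell."""
    ns = {}
    outputs = []
    for cell in cells:
        buf = io.StringIO()
        with redirect_stdout(buf):
            exec(cell, ns)  # state persists across cells via `ns`
        outputs.append(buf.getvalue())
    return outputs

outs = run_cells([
    "data = [3, 1, 2]",
    "data.sort()",
    "print(sum(data))",
])
print(outs[-1].strip())
```

Because `ns` is shared, the third cell sees the variable defined and mutated by the first two, which is exactly the stateful, multi-step behavior an agent needs for data analysis.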

Source: Hugging Face Blog Academic research involves frequent research discovery: finding papers, code, related models and datasets. This typically means switching between platforms like arXiv, GitHub, and Hugging Face, manually piecing together connections. The Model Context Protocol (MCP) is a standard that allows agentic models to communicate with external tools and data sources. For research discovery, this means AI can use research tools through natural language requests, automating platform switching and cross-referencing. Research Discovery: Three Layers of Abstraction. Much like software development, research discovery can be framed in terms of layers of abstraction. 1. Manual Research: at the lowest level of abstraction, researchers search manually and cross-reference by hand.

Typical workflow:

  1. Find paper on arXiv
  2. Search GitHub for implementations
  3. Check Hugging Face for models/datasets
  4. Cross-reference authors and citations

Source: Hugging Face Blog TextQuests. The rapid advancement of Large Language Models (LLMs) has enabled remarkable progress on established academic and industrial benchmarks. Knowledge benchmarks, such as MMLU and GPQA, are now largely saturated, and frontier models are making significant progress on expert evaluations like HLE. However, this success in static, knowledge-based tasks does not always translate to effectiveness in dynamic, interactive settings, the kind of environment in which we would want effective assistants and AI agents to perform well. Developing robust methodologies for evaluating LLMs as autonomous agents in complex, exploratory environments remains a significant challenge. Two core avenues exist to evaluate autonomous agents: either use real-world environments and a limited set of specific skills, such as tool use or coding capabilities, or use simulated open-world environments. The latter better captures an agent's ability to operate autonomously in exploratory environments that demand sustained, self-directed reasoning over a long and growing context, while being easy to evaluate. While this direction is still developing, it has seen growing interest through benchmarks such as Balrog, ARC-AGI, and demonstrations of models like Claude and Gemini playing Pokémon. Building on this emerging vein of work, we introdu

Source: Hugging Face Blog FilBench. What did we learn from FilBench? Finding #1: Although region-specific LLMs still lag behind GPT-4, collecting data to train these models is still a promising direction. Finding #2: Filipino translation is still a difficult task for LLMs. Finding #3: Open LLMs remain a cost-effective choice for Filipino language tasks. Does your LLM work on Philippine languages? Try it on FilBench! As large language models (LLMs) become increasingly integrated into our lives, it becomes crucial to assess whether they reflect the nuances and capabilities of specific language communities. For example, Filipinos are among the most active ChatGPT users globally, ranking fourth in ChatGPT traffic (behind the United States, India, and Brazil [1] [2]), but despite this strong usage, we lack a clear understanding of how LLMs perform for their languages, such as Tagalog and Cebuano. Most of the existing evidence is anecdotal, such as screenshots of ChatGPT responding in Filipino as proof that it is fluent. What we need instead is a systematic evaluation of LLM capabilities in Philippine languages. That's why we developed FilBench: a comprehensive evaluation suite to assess the capabilities of LLMs for Tagalog, Filipino (the standardized form of Tagalog), and Cebuano, on fluency, linguistic and translati

Source: Hugging Face Blog Future of AI. Most current AI benchmarks focus on answering questions about the past, either by testing models on existing knowledge (in a static manner, such as HLE or GPQA, or augmented, like BrowseComp or GAIA) or previously solved problems (like PaperBench, DABStep, or most coding evaluations). However, we believe that more valuable AI, and ultimately AGI, will be distinguished by its ability to use this past to forecast interesting aspects of the future, rather than merely reciting old facts. Forecasting future events is a complex and holistic task: it requires sophisticated reasoning, synthesis, weighing probabilities and genuine understanding, rather than pattern matching against or searching existing information. Evaluating models on their ability to predict future outcomes, whether in science, economics, geopolitics, or technology, tests the kind of intelligence that creates real-world value. Beyond its inherent importance, this forecasting-based approach also solves many methodological problems faced by current evaluations and benchmarks. Traditional benchmarks that measure accuracy on fixed test sets are inevitably affected by possible data contamination, and without access to the full reproducible training pipeline of a model, it's hard to trust the results. The most serious evaluation efforts now ke

Source: Hugging Face Blog Numina & Kimi Team. Figure 1: Performance comparison of theorem proving models on the miniF2F-test dataset. We're excited to announce the release of Kimina-Prover-72B, our state-of-the-art theorem proving model trained with the Kimi k1.5 [1] RL pipeline based on Qwen2.5-72B [2]. Alongside it, we are also releasing two distilled variants: Kimina-Prover-Distill-8B and 1.7B (based on Qwen3-8B and Qwen3-1.7B [3] respectively). Our key innovations include: Test-Time Reinforcement Learning Search: a trainable agentic proving framework that enables the model to recursively discover, combine and apply multiple lemmas to construct complex proofs, building on a novel lemma-enabled pattern. Error-Fixing Capability: Kimina-Prover can read and interpret Lean's error messages and propose targeted fixes, demonstrating significantly higher sample efficiency compared to regenerating proofs from scratch. These advancements enable Kimina-Prover to solve challenging mathematical problems and surpass prior methods. As shown in Figure 1, on the widely used miniF2F benchmark, Kimina-Prover achieves a state-of-the-art pass rate of 92.2%. Introduction: we focus on automated theorem proving (ATP) in the Lean 4 language, aiming to automate the construction of formal mat


OpenClaw

Source: Gladys Assistant (Forum) Hi everyone! You have surely heard of OpenClaw, the AI framework that exploded on GitHub a few weeks ago and has just been acquired by OpenAI. I tried connecting OpenClaw to Gladys to explore the possibilities, and it is truly impressive. I go into more detail in the video: I Let the OpenClaw AI Control My Home (It's Crazy). Note: I advise against installing OpenClaw on your Gladys server; it is still early-stage software that touches a bit of everything and has been widely criticized for its security flaws. For this test, I deployed OpenClaw on a cloud VM to keep it in an isolated environment. 5 posts - 3 participants. Read the full topic.


Cybersecurity / Cybersécurité

Source: SecurityWeek CISA has released an advisory to warn about four vulnerabilities discovered by a researcher in Gardyn Home and Gardyn Studio.

Source: The Hacker News Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic's Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. "The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing

Source: The Hacker News Cybersecurity researchers have discovered four malicious NuGet packages that are designed to target ASP.NET web application developers to steal sensitive data. The campaign, discovered by Socket, exfiltrates ASP.NET Identity data, including user accounts, role assignments, and permission mappings, as well as manipulates authorization rules to create persistent backdoors in victim applications.

Source: The Hacker News Cybersecurity researchers have disclosed details of a new ClickFix campaign that abuses compromised legitimate sites to deliver a previously undocumented remote access trojan (RAT) called MIMICRAT (aka AstarionRAT). "The campaign demonstrates a high level of operational sophistication: compromised sites spanning multiple industries and geographies serve as delivery infrastructure, a multi-stage

Source: The Hacker News Cybersecurity researchers have discovered what they say is the first Android malware that abuses Gemini, Google's generative artificial intelligence (AI) chatbot, as part of its execution flow and achieves persistence. The malware has been codenamed PromptSpy by ESET. The malware is equipped to capture lockscreen data, block uninstallation efforts, gather device information, take screenshots,

Source: Bleeping Computer The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has released new details about RESURGE, a malicious implant used in zero-day attacks exploiting CVE-2025-0282 to breach Ivanti Connect Secure devices. [...]

Source: The Hacker News Cybersecurity researchers have disclosed details of a malicious Go module that's designed to harvest passwords, create persistent access via SSH, and deliver a Linux backdoor named Rekoobe. The Go module, github[.]com/xinfeisoft/crypto, impersonates the legitimate "golang.org/x/crypto" codebase, but injects malicious code that's responsible for exfiltrating secrets entered via terminal password

Source: The Hacker News Cybersecurity researchers have disclosed details of a new botnet loader called Aeternum C2 that uses a blockchain-based command-and-control (C2) infrastructure to make it resilient to takedown efforts. "Instead of relying on traditional servers or domains for command-and-control, Aeternum stores its instructions on the public Polygon blockchain," Qrator Labs said in a report shared with The

Source: The Hacker News Cybersecurity researchers have disclosed details of a new malicious package discovered on the NuGet Gallery, impersonating a library from financial services firm Stripe in an attempt to target the financial sector. The package, codenamed StripeApi.Net, attempts to masquerade as Stripe.net, a legitimate library from Stripe that has over 75 million downloads. It was uploaded by a user named

Source: The Hacker News The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a recently disclosed vulnerability in FileZen to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation. The vulnerability, tracked as CVE-2026-25108 (CVSS v4 score: 8.7), is a case of operating system (OS) command injection that could allow an authenticated user to execute

Source: The Hacker News Cybersecurity researchers have disclosed details of a new cryptojacking campaign that uses pirated software bundles as lures to deploy a bespoke XMRig miner program on compromised hosts. "Analysis of the recovered dropper, persistence triggers, and mining payload reveals a sophisticated, multi-stage infection prioritizing maximum cryptocurrency mining hashrate, often destabilizing the victim

Source: The Hacker News Security news rarely moves in a straight line. This week, it feels more like a series of sharp turns, some happening quietly in the background, others playing out in public view. The details are different, but the pressure points are familiar. Across devices, cloud services, research labs, and even everyday apps, the line between normal behavior and hidden risk keeps getting thinner. Tools

Source: The Hacker News Cybersecurity researchers have disclosed what they say is an active "Shai-Hulud-like" supply chain worm campaign that has leveraged a cluster of at least 19 malicious npm packages to enable credential harvesting and cryptocurrency key theft. The campaign has been codenamed SANDWORM_MODE by supply chain security company Socket. As with prior Shai-Hulud attack waves, the malicious code embedded

Source: The Hacker News Artificial intelligence (AI) company Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security, is currently available in a limited research preview to Enterprise and Team customers. "It scans codebases for security vulnerabilities and suggests targeted

Source: PortSwigger Research Welcome to the Top 10 Web Hacking Techniques of 2025, the 19th edition of our annual community-powered effort to identify the most innovative must-read web security research published in the last year

Source: PortSwigger Research Nominations are now open for the top 10 new web hacking techniques of 2024! Every year, security researchers from all over the world share their latest findings via blog posts, presentations, PoCs, an

Source: CERT-FR A vulnerability has been discovered in Stormshield Network Security. It allows an attacker to bypass the security policy.

Source: The Hacker News A "coordinated developer-targeting campaign" is using malicious repositories disguised as legitimate Next.js projects and technical assessments to trick victims into executing them and establish persistent access to compromised machines. "The activity aligns with a broader cluster of threats that use job-themed lures to blend into routine developer workflows and increase the likelihood of code

Source: Bleeping Computer American manufacturer of medical devices, UFP Technologies, has disclosed that a cybersecurity incident has compromised its IT systems and data. [...]

Source: The Hacker News SolarWinds has released updates to address four critical security flaws in its Serv-U file transfer software that, if successfully exploited, could result in remote code execution. The vulnerabilities, all rated 9.1 on the CVSS scoring system, are listed below - CVE-2025-40538 - A broken access control vulnerability that allows an attacker to create a system admin user and execute arbitrary

Source: The Hacker News The Iranian hacking group known as MuddyWater (aka Earth Vetala, Mango Sandstorm, and MUDDYCOAST) has targeted several organizations and individuals mainly located across the Middle East and North Africa (MENA) region as part of a new campaign codenamed Operation Olalampo. The activity, first observed on January 26, 2026, has resulted in the deployment of new malware families that share

Source: The Hacker News With $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling, four new AI certifications and Certified CISO v4 help close the gap between AI adoption and workforce readiness. EC-Council, creator of the world-renowned Certified Ethical Hacker (CEH) credential and a global leader in applied cybersecurity education, today launched its Enterprise AI Credential Suite,

Source: The Hacker News The cyber threat space doesn’t pause, and this week makes that clear. New risks, new tactics, and new security gaps are showing up across platforms, tools, and industries — often all at the same time. Some developments are headline-level. Others sit in the background but carry long-term impact. Together, they shape how defenders need to think about exposure, response, and preparedness right now

Source: Krebs on Security Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six "zero-day" vulnerabilities that attackers are already exploiting in the wild.

Source: Krebs on Security A new Internet-of-Things botnet called Kimwolf has spread to more than 2 million devices, forcing infected systems to participate in massive distributed denial-of-service (DDoS) attacks and to relay other malicious and abusive Internet traffic. Kimwolf's ability to scan the local networks of compromised systems for other IoT devices to infect makes it a sobering threat to organizations, and new research reveals Kimwolf is surprisingly prevalent in government and corporate networks.

Source: Krebs on Security Our first story of 2026 revealed how a destructive new botnet called Kimwolf rapidly grew to infect more than two million devices by mass-compromising a vast number of unofficial Android TV streaming boxes. Today, we'll dig through digital clues left behind by the hackers, network operators, and cybercrime services that appear to have benefitted from Kimwolf's spread.

Source: Krebs on Security KrebsOnSecurity.com celebrates its 16th anniversary today! A huge "thank you" to all of our readers -- newcomers, long-timers and drive-by critics alike. Your engagement this past year here has been tremendous and truly a salve on a handful of dark days. Happily, comeuppance was a strong theme running through our coverage in 2025, with a primary focus on entities that enabled complex and globally-dispersed cybercrime services.

Source: PortSwigger Research Manual testing doesn't have to be repetitive. In this post, we're introducing Repeater Strike - a new AI-powered Burp Suite extension designed to automate the hunt for IDOR and similar vulnerabilities

Source: PortSwigger Research Tired of repeating yourself? Automate your web security audit trail. In this post I'll introduce a new Burp AI extension that takes the boring bits out of your pen test. Web security testing can be a

Source: PortSwigger Research Have you ever wondered how many vulnerabilities you've missed by a hair's breadth, due to a single flawed choice? We've just released Shadow Repeater, which enhances your manual testing with AI-powere

Source: PortSwigger Research HTTP cookies often control critical website features, but their long and convoluted history exposes them to parser discrepancy vulnerabilities. In this post, I'll explore some dangerous, lesser-known

Source: PortSwigger Research The strength of our URL Validation Bypass Cheat Sheet lies in the contributions from the web security community, and today’s update is no exception. We are excited to introduce a new and improved IP a

Source: PortSwigger Research We're delighted to announce three major research releases from PortSwigger Research will be published at both Black Hat USA and DEF CON 32. In this post, we'll offer a quick teaser of each talk, info

Source: PortSwigger Research Security research involves a lot of failure. It's a perpetual balancing act between taking small steps with a predictable but boring outcome, and trying out wild concepts that are so crazy they might

Source: PortSwigger Research In this post, I'll share my approach to developing custom automation to aid research into under-appreciated attack classes and (hopefully) push the boundaries of web security. As a worked example, I'l


Local News (IP-based) / Informations locales (basées IP)

Source: Le Dauphiné Isère Sud On Sunday, October 28, 2012, heavy snowfall took everyone by surprise. Such a quantity falling in Grenoble in October was "unheard of in at least 70 years," according to meteorologists at MeteoNews. With a thick layer of powder on the roads, chaos quickly spread through the center of the Alpine capital. A look back in pictures.

Source: Le Dauphiné Isère Matchday 24 of Ligue 2 stopped at the Stade des Alpes this Friday evening. The duel between lowly placed Grenoble Foot 38 (12th) and Boulogne-sur-Mer (13th) produced a rather insipid match with few chances to get excited about (0-0). At least the goalless draw keeps the Isère side ahead of their northern rivals in the standings. Relive the match as it happened.

Source: Le Dauphiné Isère Sud Last weekend should have seen the round of 32 of the Coupe de France, but Covid-19 got in the way... So, just for the pleasure, we invite you to relive, in pictures, one of the greatest upsets in the competition's history. It was six years ago to the day. In a roaring Stade des Alpes, the GF38 of Bengriba and Cattier knocked out the OM of Bielsa and Thauvin, then leaders of Ligue 1. Unforgettable.

Source: Le Dauphiné Isère Sud For the first time in their history, FCG and VRDR met on a rugby pitch in October 2019 at the Stade des Alpes. This first-ever "Dauphiné derby" went heavily in favor of the Isère side (49-8). A look back in pictures at the evening in the Grenoble stadium.

Source: Le Dauphiné Isère Find all the results from matchday 41 of the Ligue Magnus, played this Friday.

Source: Le Dauphiné Isère While a demonstration was taking place outside, RN candidate Valentin Gabriac presented his list for the municipal elections at a rally held at the tourist office.

Source: Le Dauphiné Isère FC Grenoble secured a precious victory away at Nevers (26-27). The Isère side, with a try by Soury and a conversion by Navizet, made the difference after the final hooter, ending their dismal run of ten consecutive away defeats. With 45 points, the Blue and Reds are ninth, ten points behind sixth-placed Brive, the last qualifying spot for the playoffs. Relive the match as it happened.

Source: Le Dauphiné Isère The two brothers arrested on Thursday, February 26, one in Grenoble and the other in Meurthe-et-Moselle, are those who had claimed to have been shot at for no reason while in a van on Sunday evening on Rue Normandie-Niemen in Échirolles. Video surveillance revealed that they too had fired weapons and that they were with a third man, who has also been arrested.

Source: Le Dauphiné Isère Around 400 people gathered on Place Grenette in Grenoble to protest against a rally by Valentin Gabriac, the Rassemblement National candidate in Grenoble's municipal election, being held at the city-center amphitheater.

Source: Le Dauphiné Isère The Grenoble criminal court acquitted a history and geography teacher charged with making antisemitic insults against Hervé Gerbi. He had put up posters comparing the 1940 handshake between Hitler and Philippe Pétain with one between an Israeli soldier and the candidate in Grenoble's municipal election.

Source: Le Dauphiné Isère Sud Earthquakes of magnitude 5 or higher on the Richter scale are unusual in France. In the greater Southeast, this threshold has been exceeded only five times in the past 60 years: in Ardèche, Isère, the Alpes-de-Haute-Provence and Hautes-Alpes, and in Haute-Savoie. Here are the most powerful earthquakes recorded in our départements.

Source: Le Dauphiné Isère Sud FC Grenoble Rugby hosts its "neighbor" Valence Romans Drôme Rugby this Friday, November 10 at 9 p.m. for matchday 10 of Pro D2. It is only the fourth meeting between these two teams, as VRDR was founded in 2016. From the 1950s to the 1990s, however, there was no shortage of matches between the Isère side and the Drôme clubs of Valence and Romans. A look back in pictures at these derbies, old and new.

Source: Le Dauphiné Isère Sud On the evening of November 13, 2015, nine men carried out a series of attacks near the Stade de France in Saint-Denis, on restaurant terraces, and at the Bataclan concert hall in Paris, killing 130 people and injuring more than 350. In the days that followed the horror, gatherings were organized in Grenoble, Valence, and Annecy. Other towns, such as Gap, Jarrie, and Gilly-sur-Isère, mourned relatives killed in the attacks. A look back in pictures at these moments of remembrance.

Source: Le Dauphiné Isère Sud As the Foire des Rameaux returns to Grenoble this weekend, we take one last dive into our archives. We go back to the 1980s, with a fairground worker threatening to set himself on fire and Alain Carignon going down a slide. A look back in pictures.


French Government / Gouvernement & Services Publics

Source: ANSSI (CERT-FR) Multiple vulnerabilities have been discovered in Microsoft products. Some of them allow an attacker to cause a remote denial of service, breach data confidentiality, and bypass the security policy.

Source: ANSSI (CERT-FR) A vulnerability has been discovered in Cisco Catalyst SD-WAN. It allows an attacker to bypass the security policy. Cisco indicates that vulnerability CVE-2026-20127 is being actively exploited.

Source: ANSSI (CERT-FR) This CERT-FR news bulletin reviews the significant vulnerabilities of the past week to highlight their criticality. It does not replace an analysis of all the advisories and alerts published by CERT-FR as part of a risk assessment to prioritize the application of...


France News / Journaux France

Source: Ouest-France A very unusual evening for Stade Briochin on February 27, during matchday 22 of the National championship against Le Puy Foot. The Breton club took only a single point and lost two players: one is suspended, the other is in hospital. Aimeric Gomis lost consciousness after a collision, and the match was interrupted for 23 minutes.

Source: Le Figaro CONFIDENTIELS - Each week, the editorial team takes its readers behind the scenes of political life.

Source: Le Monde Tasked by the government since early September with proposing a reform of the governance of the PMU, he recently had his assignment extended. Since it now exceeds six months, his mandate as a member of parliament automatically lapses.

Source: Le Monde Faced with the growing influence of a far-right international movement, short essays and articles on the fight against fascism by the Italian philosopher Antonio Gramsci and the Italian writer Umberto Eco, as well as a book on the American thinker Herbert Marcuse, are now available in bookstores.

Source: Le Monde The removal in January of the highest-ranking general of the People's Liberation Army reflects President Xi's determination to secure his political preeminence and to fight the very real corruption of the elites, researcher James Char explains in an interview with Le Monde.

Source: Le Monde For four days, the ICC prosecutor's office sought to convince the court's judges to indict the former president for crimes against humanity committed as part of his war on drugs. The judges have sixty days to decide whether to send the former leader to trial.

Source: Le Figaro Accused of antisemitism after his remarks about Jeffrey Epstein at a conference in Lyon on Thursday evening, the leader of La France Insoumise is facing an outcry from political leaders over his statements.

Source: Le Figaro Quentin's death: targeted from all sides, Mélenchon invites "new media" sympathetic to his cause to defend himself. At a press conference from which journalists from traditional media were excluded, the Insoumis leader reiterated on Monday his "sympathy" for the Jeune Garde, several of whose members are accused of taking part in the fatal lynching of the young nationalist activist in Lyon.


Weather / Météo

Source: Météo-Paris The blizzard of late December 1970 caused monumental chaos on the Autoroute du Soleil between Valence and Montélimar: a snowstorm of the scale New York has just experienced. archives meteo-paris.com (colorized photo). New York has just endured its heaviest snowstorm in decades, with 50 cm at Central Park! Could such a blizzard occur in France? Our article explains.

Half a meter of snow in New York! New York is no stranger to snowstorms. Even so, the one that struck on Monday, February 23, 2026 was unusually intense. Snow accumulations reached 50 centimeters at Central Park, in the heart of the city, and 58 cm at the La Guardia airport station! A snowstorm of such intensity is rare, occurring on average only once every 25 years in New York! Long Island was hit even harder, with up to 74 cm measured at MacArthur Airport, an accumulation not seen since 1963! 50 cm of snow covered New York (USA) on Monday, February 23, 2026 - photo Pictures of New York. Even so, this was not the biggest storm New York has seen. The strongest occurred on January 22-23, 2016, only 10 years ago: 70 centimeters of snow fell at Central Park, an accumulation never recorded since weather measurements began in 1869! Local life was severely slowed and cars were buried under this unprecedented blanket of snow. 70 cm of snow in New York after the January 2016 storm, a record! - photo Jackson Krule.

Are such snowstorms possible in France? Snowstorms on the scale of the one that has just hit the northeastern USA are not really possible in France, for geographical reasons. When polar air descends over the northeastern United States, it reaches the Atlantic Ocean without having crossed any sea, so it is still extremely cold. This icy air creates a sharp thermal contrast with the mild waters of the Atlantic, which can trigger explosive cyclogenesis over the ocean, with depressions deepening very rapidly and generating intense snowstorms along the American east coast, as happened on Monday, February 23, 2026. Satellite image of the eastern USA and the Atlantic on Monday, February 23, 2026 - NASA.

In France the situation is different, because the country is bordered by several seas and by the Atlantic. A polar air mass therefore cannot reach France intact. The cold-air outbreaks that affect the country are far less intense than those hitting North America, and the contrast with the mild ocean waters is correspondingly reduced, producing weaker depressions. Even so, lows can still deepen where mild and cold air meet, sometimes causing severe snowstorms and blizzards, as in Normandy in March 2013. One can also mention the blizzard of early January 1979, between the Beauce and Paris, and that of February 1986, which affected almost the same regions. On the night of February 25-26, 1958, a violent blizzard struck the Nord-Pas-de-Calais, forming snowdrifts more than two meters high. In southeastern France, blizzard episodes generally result from the combination of a deep Mediterranean depression and a polar air mass. The situation was particularly chaotic in February 1954 around Perpignan, in February 1956 in Provence, and at the end of December 1970 in the middle Rhône valley. This list is of course not exhaustive: many other episodes have also produced snowdrifts several meters high in our lowland regions. Two-meter snowdrifts at Gonneville in the Cotentin on March 13, 2013 - via infoclimat.fr.

In summary: the snowstorms that affect France are linked to depressions that are generally shallower and smaller than those in the United States, because the thermal conflicts between polar and oceanic air are less extreme. Besides being generally weaker, they are also much less frequent than on the other side of the Atlantic, where they occur every winter. Even so, the clashes of air masses over Europe can still produce major snow events.

Also read: >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! >>> Alpine resorts buried under several meters of snow! >>> What if March turns out very dry? >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik

Source: Météo-Paris

Impressive snowdrifts in the streets of New York in March 1888 - archives

New York has just been through an impressive snowstorm. Yet the city has known even more remarkable blizzards in the past. This article looks back at the four most notable storms in New York's history.

#4 - Storm of March 1888: 53 cm

In terms of direct consequences, this was the most devastating snowstorm in New York's history! And yet it ranks only fourth in snowfall, with a total of 53 cm over the city. It was marked by very violent winds, which built drifts of 1 to 2 metres in the heart of the city! At the time there were no weather forecasts and no one was prepared for the event. The city was paralysed and power lines were damaged. The storm caused more than 400 deaths, around 200 of them in New York City.

The streets of New York after the remarkable snowstorm of March 1888 - archives

#3 - Storm of December 1947: 67 cm

The snowstorm that struck the day after Christmas 1947 remains to this day on the podium of the strongest ever to hit New York. On 26 December 1947, the city was buried under a 67 cm blanket of snow and completely paralysed. Moreover, the blizzard had been badly anticipated by the forecasting services of the time, which had not warned the population. The storm claimed 77 lives in the northeastern USA.

67 cm of snow in the streets of New York after the snowstorm, 27 December 1947 - photo Al Fenn

#2 - Storm of February 2006: 68 cm

Among the most memorable New York snowstorms, the one of 11-12 February 2006 left a lasting impression. At the time, a depression deepened to around 970 hPa off the American coast, and particularly heavy snowfall affected the northeastern states. New York was especially hard hit, receiving 68 centimetres of snow, a total that then set an absolute record! New York City estimated that the massive snow-removal operations cost the city around 27 million dollars.

New York in the blizzard during the storm of 12 February 2006, which brought 68 cm! - photo Wikipedia

#1 - Storm of January 2016: 70 cm

Remarkable as the storms above were, they are not, climatologically speaking, the ones that brought the largest snowfall. New York's biggest snowstorm is recent, occurring on 22-23 January 2016, only 10 years ago. At the time, 70 centimetres of snow fell at Central Park, a total never measured since weather records began in 1869! Local life was severely slowed and cars were buried under this unprecedented snow cover.

70 cm of snow in New York after the storm of January 2016, a record! - photo Jackson Krule

Note: the figures cited in this article come from snow-depth measurements at Central Park, the reference station for New York City. American meteorologists and climatologists rank New York blizzards on that basis.

Author: Alexandre Slowik

Source: Météo-Paris

The Château de Versailles under snow in a polar atmosphere on 15 February 2010 - Météo-Villes archives

An exceptionally snowy January!

After a rather mild autumn 2009, winter made a noted return to France in December. A first cold wave spread across the country from 13 December, with icy air from Russia invading the whole of France and snow in many regions, even reaching the Nice area on 18 December. This first cold wave ended around Christmas Day, but the mild spell that closed the year proved very short-lived. The cold returned in force from 31 December 2009. Early January 2010 was thus very wintry over almost all of France, with the first snowfalls on 1 January between the Channel coasts and Île-de-France (10 cm in Seine-Maritime). On 3-4 January the snow was more widespread over the south, falling from Poitou to Rhône-Alpes via Limousin and Auvergne: 20 cm was measured in Grenoble, 13 cm in Lyon and 8 cm in Clermont-Ferrand.

8 cm in Lyon on 4 January 2010 - Météo-Villes archives

On 6 January, snow showers hit Normandy, with 20 cm measured at Honfleur. The next day, 30 cm was measured at Alençon, 20 cm at Dreux, Chartres and across the south of the Yvelines (78), Essonne (91) and Seine-et-Marne (77) as a cold pool passed, while it also snowed in the southeast from Languedoc-Roussillon to the Camargue, then into the Rhône valley and the Alps the following night. While the morning of 8 January was glacial in the north of the country, down to -20.6°C at Brétigny-sur-Orge, beating the 1985 record, very heavy snowfall persisted in the south: up to 50 cm at Gap, 35 cm at Grenoble, 30 cm at Orange and even 20 cm in the Camargue.

The snowfall persisted the next day, with totals becoming impressive inland from Languedoc-Roussillon to the Tarn. Some hill villages were cut off, with sometimes more than 50 to 60 cm accumulated!

60 cm of snow on 9 January 2010 at Les Martys (900 m) in the Aude - photograph: meteo81

Snow also spread back over the north of the country that day, adding 2 to 10 cm as far as Alsace, Burgundy, Franche-Comté and even Brittany. A great many French regions were thus under snow. After another disturbance bringing snow (3 to 7 cm over much of the north and east) and freezing rain between Pays de la Loire, Brittany and Lower Normandy, the weather turned drier until 20 January, with a very gradual thaw over most of the country. The respite was once again brief: a third cold wave invaded France from 25 January 2010. On 28 January, fresh snow showers affected a broad northeastern quarter as far as the northern Alps, before a more active disturbance the next day brought heavier snowfall to Champagne-Ardenne, Lorraine, Burgundy, Franche-Comté and southern Alsace, the snow settling mainly above 300 m. On 31 January, snow again reached the Côte d'Azur, with several centimetres between Nice and Fréjus and especially between Cannes and Saint-Raphaël, where the beaches were well whitened. At the same time, -14°C was recorded at Aurillac and -11°C at Nevers.

Beaches whitened by snow in Cannes on the morning of 31 January 2010 - Météo-Villes archives

A February just as wintry!

February thus began cold and snowy in many regions. From 1 February, snowfall was observed over part of the north and east, with for example 5-7 cm in Lyon. On 2 February, 30 cm of snow was measured from 200 m altitude in the northeast, before a marked thaw over the whole country on the 3rd and 4th. At La Mure, for example, the temperature went from -16.9°C in the early night to 10.7°C during the day! Once again the milder spell was temporary: the season's fourth cold wave invaded France on 9 February, with snow returning to the north by the end of the day. On 10 February, snow showers affected much of the north and centre of the country, most frequently over the Pas-de-Calais. It was on 11 February, however, that the snow became more widespread and at times heavy, once again on the Côte d'Azur: 5 cm was measured in Nice at the end of the day, 10 to 15 cm in Cannes and up to 30-40 cm around Grasse from 200 m altitude! The following days were drier but generally glacial over almost the whole country, except between the tip of Brittany, the Côte d'Azur and the Corsican coast, with widespread and sometimes severe frosts, minima often dropping below -10 to -15°C.

The successive snowfalls and the cold became a burden for many French people - front page of 10 February 2010

This fourth cold wave ended on 17 February with the return of a very disturbed but milder oceanic flow. Several successive storms then affected France at the end of February, notably the famous storm Xynthia on the evening of 27 February. Winter had not said its last word, however, making a final return during March. This start of 2010 was therefore exceptionally snowy in France, the snowiest since winter 1986-1987, so much so that few regions were spared the snowflakes between December, January and February.

See also: >>> February 1991: cold wave and snow from Brittany to the Côte d'Azur >>> February 1956: the worst cold wave of the 20th century in France! >>> February 1954: a cold wave like no other >>> -20°C in the lowlands: looking back at the February 2012 cold wave >>> Up to 70 cm of lowland snow in late January in Roussillon! >>> A glacial first winter of the Second World War! Author: Tristan Bergen

Source: Météo-Paris

After a particularly wet start to the year, high pressure is restoring calm weather that looks set to last. Should we fear a drought despite a very rainy winter? This article offers some answers.

Very well recharged aquifers!

After a particularly rainy start to 2026, the water tables have recharged effectively. A few days before the start of meteorological spring, and with the aquifer-recharge season drawing to a close, the situation is more than satisfactory in most regions: 70% of France's aquifers show levels at or above normal. Another piece of good news: the Aude and the Pyrénées-Orientales, which had been suffering from chronic drought, received very abundant rainfall, and their water tables have risen to levels not seen in many years.

Water-table levels on Thursday 26 February 2026 - info-secheresse.fr

Beyond the situation at depth, surface conditions also deserve mention. After this exceptional start to 2026, the soil-moisture index is setting records! A few days ago the national index was at its highest level since measurements began for this time of year. It even ranks on the podium of the wettest soil conditions ever recorded in France, all dates combined: only December 1982 and January 1994, both marked by severe flooding, had slightly higher average soil moisture than today.

National average soil-moisture index from 18 February 2025 to 17 February 2026 - Météo France

In short: at the end of February 2026 we are at the opposite extreme from a drought, with surface soils saturated with moisture and water tables at high levels in many regions.

Does a drought remain possible by summer?

With water tables often at very satisfactory levels, the spectre of drought naturally looms less than in recent years. The current shift nevertheless needs watching, as the return of drier conditions seems likely to last. The latest outlooks for March 2026 point to a dry month in France, even very dry in the southern half, where the rainfall deficit could be marked. At a season when awakening vegetation is thirsty for water, soils will therefore tend to dry out.

Expected rainfall anomaly over Europe in March 2026 - NOAA

High water tables do not protect us from a risk of surface drought. As its name suggests, this consists of a pronounced moisture deficit in the upper soil, which can impair plant development; it is often called "agricultural drought" for this reason. Unlike deep drought (linked to the aquifers), surface drought can appear in just a few weeks once high pressure settles in and rain is lacking, especially with strong sunshine and high temperatures.

Surface soil drought can develop within a few weeks - photo Fabrice Elsner

Thus, the risk of a major deep drought seems very limited this year, thanks to the high water-table levels at the end of winter. On the other hand, a dry, warm spring would be enough to dry out surface soils considerably and could cause a surface drought even with high water tables. It is important to distinguish these two types of drought, which can occur independently of each other.

See also: >>> Nearly 140 heat records broken this Wednesday in France! >>> What if March turns out very dry? >>> Is a New York-style blizzard possible in France? >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! Author: Alexandre Slowik

Source: Météo-Paris

At the end of winter 1963, the snowpack was at times spectacular in the mountains, notably in the Vosges, where it locally exceeded 10 metres on the highest ridges. meteo-paris.com archives

The winter of 1962-63 was the longest in centuries

After a relatively mild late 1950s and early 1960s, the winter of 1962-1963 stands as one of the longest and most notable of the 20th century. In Paris it became the coldest on record since the winter of 1879-1880. Frost set in from mid-November 1962 and persisted, with only brief interruptions, until early March 1963. At the same time, the Algerian War was ending, triggering the mass exodus of the pieds-noirs to mainland France. Accustomed to milder winters, they discovered a France with an almost polar climate. In Marseille, the liners Ville-d'Oran and Hairouan even had their departure for Algiers delayed by a day because of the snow and intense cold. The whole country froze. In Deauville, yachts remained trapped in the ice, while thousands of barges were stranded on frozen canals and rivers. By late December the Rhine, the Rhône and the Seine were carrying ice floes, soon joined by the Garonne and the Loire.

Brittany was far from spared by this monumental cold wave... here, at the entrance to Rennes, at the end of February 1963 - meteo-paris.com archives

The period from 12 January to 6 February was the harshest

This episode was marked by near-permanent frost. Temperatures reached exceptional levels: -27°C at Ambérieux, -26°C at Vichy, -23°C at Lyon, -18°C at Montpellier, -14°C at Dinard and -13°C in Paris. Marseille had its fourth snowfall of the winter, with another 20 cm. Fuel shortages resurfaced: coal consumption rose by 40% and fuel-oil consumption doubled. In early February, many tenants of Paris's public housing estates found themselves without heating. The Union des Vieux de France demanded an emergency allowance for the elderly. On the fringes of Burgundy, Berry, Lorraine and the Isère, a few wolves from Eastern Europe were spotted, driven south by the cold. Some made the best of the situation...

Carcassonne under several tens of centimetres of snow in February 1963 - colourised photo - meteo-paris.com archives

-29°C in the Hérault... pack ice on the North Sea coast!

The frost was so intense that pack ice formed along the North Sea coast, from Dunkirk to De Panne in Belgium. The sea also froze in Charente-Maritime, at La Courbe. All the major rivers carried ice floes, some even locally freezing over completely. On 4 February, a violent snowstorm paralysed Languedoc-Roussillon and Corsica. Factories collapsed under the weight of the snow. At Saint-Martin-de-Londres the temperature fell to -29°C, destroying entire orchards. On the Côte d'Azur, the flower production of the Antibes greenhouses was wiped out. A temporary thaw on 6-7 February raised hopes of an improvement, but the cold quickly reasserted itself. On 19-20 February, fresh snowfall covered the country. In the Paris region, 15 to 20 cm of powder turned the slopes into improvised ski runs.

Skiers on the Trocadéro esplanade, in front of the Eiffel Tower, after the late-February 1963 snowfalls - colourised photo - meteo-paris.com archives

Very many deaths... and a very late thaw

In March, the thaw caused major damage to roads. Although agriculture suffered less than in 1956, the winter wheat was partially destroyed. In France, the number of deaths linked to this exceptional winter reached 30,000, an alarming toll given the duration and intensity of the cold. This winter of 1962-1963 was also remarkable on a global scale: the eastern United States, Canada, China, Japan, Siberia and all of western Europe experienced extremely harsh conditions, while Alaska, Iceland, North Africa, the Middle East and India enjoyed unusual mildness.

>>> After the war, the ordeal of the great cold of winter 47-48 >>> The torment of the terrible winter of 1917 >>> Up to 60 cm of snow on the Côte d'Azur at the end of February! >>> -40 wind chill in Marseille and Dunkirk blocked by pack ice: it can happen! >>> A polar February 1963, at the end of the longest winter of the 20th century >>> February 1954: a cold wave like no other Author: Guillaume Séchet

Source: Météo-Paris

As milder weather takes hold in the coming days, one wonders whether winter is over in France. Yet both the distant and the recent past have shown that winter can still strike well beyond February.

Two-metre snowdrifts at Gonneville in the Cotentin (50) on 13 March 2013 - via infoclimat.fr

Major cold outbreaks can still occur

When the days lengthen and the North Pole warms, the polar vortex, which concentrates most of the cold air at high latitudes, becomes less stable; that is, it becomes less and less compact. It can then deform, driving larger undulations of the jet stream, and cold-air outbreaks can escape from the pole towards the mid-latitudes, including France. Late cold snaps in March are therefore entirely normal in our regions, which is why winter should never be buried too soon.

Diagram of an unstable polar vortex and an undulating jet stream (typical of spring) - NOAA

A glance at past records shows that marked cold can still occur during March. At Paris-Montsouris, temperatures can still drop below -5°C in the first half of the month; the most recent example dates from 13 March 2013, with -5.5°C at dawn. Days without a thaw are also still possible into early March. In fact, the latest ice day on record is quite recent, on 13 March 2023, when the thermometer did not exceed -1.4°C in the capital!

Lowest minimum and maximum temperatures measured at Paris-Montsouris in March since 1886 - infoclimat.fr

Winter offensives in March: recent examples

It is therefore far too early to bury winter. While the meteorological winter ends on 28 February, cold and lowland snow can still occur well afterwards! Remember that it can still snow anywhere in France during March. There is no need to dig through distant archives to find notable March winter episodes. In 2010, the Perpignan conurbation found itself under 25 to 40 cm of snow on 8 March, while temperatures locally plunged to -10°C in the northeast of the country!

30 to 40 cm of snow over the Perpignan conurbation (66) on Monday 8 March 2010 - Météo Villes

Closer to us still, there was the historic snowstorm that swept from Brittany to Belgium on 12 March 2013. Normandy was the hardest-hit region, and Météo France even issued a RED snow warning for the Manche and Calvados. Violent winds built snowdrifts 1 to 2 metres high; in places, vehicles were literally buried! Drifts aside, 20 to 40 cm fell quite generally across these departments.

The village of Les Pieux in the Cotentin (50), in the snowstorm of 13 March 2013 - via infoclimat.fr

Although there is currently no real signal of returning cold, it can by no means be ruled out while we are still in February. These past examples remind us that cold-air outbreaks can follow the first spells of spring-like mildness.

See also: >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Storm Pedro: the last straw! >>> 1 death and heavy damage: storm Nils hit hard! Author: Alexandre Slowik

Source: Météo-Paris

The temperature anomalies forecast for next Wednesday are very large: locally up to 12°C above the seasonal norms! After long weeks of very unsettled weather, with heavy rain, violent winds and flooding, the situation is finally improving. Even if winter is not yet over, the end of February often brings the first signs of spring, and that is what seems to be shaping up this year.

Return of the Azores High

For several weeks the Azores High has been neglecting us. Positioned far too south, it has let the disturbed oceanic flow influence the whole of Europe, including the Iberian Peninsula. In this context, France has experienced particularly unsettled weather, which explains the increasingly widespread and marked flooding. The situation is about to change, however. From this weekend, high pressure should gradually move back over continental Europe, with the anticyclone centring over Andalusia. The oceanic westerly flow will persist over France but will be much less active: rain will be confined mainly to the Channel coasts.

Expected evolution of the air mass between Tuesday and Wednesday - WRF Meteociel animation

A spring-like interlude of one or two days

From Monday, and especially Tuesday, high pressure will shift further over the continent. The wind will turn southerly, favouring the arrival of very mild air from Morocco. In late February the sun is already climbing higher in the sky, so in such a setup temperatures can rise more readily than in midwinter. The Valentine's Day period, incidentally, often marks the start of nature's awakening: some birds begin their breeding season, which is said to be the origin of the celebration. Late February regularly offers one or two spring-like days, and Tuesday and especially Wednesday should illustrate this perfectly: the sky will finally clear over three-quarters of the country and afternoon temperatures will frequently exceed 15°C. With a southerly wind blowing at the foot of the Pyrenees, the warmth threshold could even be approached, or locally reached, with up to 25°C in the Basque Country.

METEO-VILLES temperature forecast for next Wednesday

This spring-like interlude could be fairly short-lived. From Thursday, the disturbed oceanic flow should regain the upper hand, bringing back clouds, some rain from the west and, mechanically, somewhat less pleasant temperatures, although the mildness will persist.

See also: >>> What if March turns out very dry? >>> Alpine resorts buried under several metres of snow! >>> Why has it rained so much since the start of the year? Author: Guillaume Séchet

Source: Météo-Paris

The beach at La Ciotat (13) packed on Sunday 22 February 2026 - town hall photo

Temperatures are soaring to record levels at the end of February, reaching 25°C in central France and flirting with 30°C at the foot of the Pyrenees. Early warm spells are becoming more frequent and easier to reach.

Nearly 30°C in late February!

Spring is already here, if not summer! In recent hours, temperatures have soared to remarkable levels in France. The first warmth of the year affected the southwest on Tuesday 24 February 2026. At the foot of the Pyrenees the 25°C mark was comfortably passed, and 30°C was even approached in the Béarn, with a maximum of 29.5°C measured at Saint-Gladie-Arrive-Munein, far ahead of its previous record of 27.0°C from February 2020! Also worth noting: 26.6°C in Biarritz, 26.2°C in Pau, 25.9°C in Saint-Girons and 25.2°C in Dax.

Maximum temperatures recorded in the southwest on Tuesday 24 February 2026 - meteociel.fr

This unusual warmth, while we are still in winter, continued on Wednesday 25 February 2026, extending to the north of France. The country had an extraordinary day, with up to 28°C in the Basque Country and dozens of mildness/warmth records broken all the way to the shores of the North Sea! Among the most striking records: 26.5°C at Biscarrosse in the Landes, 25.6°C at Tiranges in Haute-Loire, 25.2°C at Montgivray in the Indre, 25.1°C at Tulle in Corrèze, 24.7°C at Montluçon in the Allier, 22.4°C at Orléans in the Loiret and 22.3°C at Le Mans in the Sarthe!

Maximum temperatures measured in France on Wednesday 25 February 2026 - Météo Villes

More than 100 monthly records broken on 25 February!

More than 100 mildness and warmth records were broken on Wednesday 25 February 2026, proof that this was one of the warmest days ever recorded for a month of February. For example: 26.5°C at Biscarrosse (40), 25.6°C at Tiranges (43), 25.2°C at Montgivray (36), 25.1°C at Tulle (19), 24.7°C at Montluçon (03), 22.4°C at Orléans (45) and 22.3°C at Le Mans (72). Many of the previous records dated from late February 1960, 1990, 1998 or 2019. >>> list of the records of 25 February 2026 here >>>

Map of the monthly temperature records broken on 25 February 2026 - Meteociel.fr

Early warmth increasingly easy to reach

When discussing early warmth, it is hard not to mention the 31.2°C at Saint-Girons (Ariège) on 29 February 1960. It should be noted, however, that the southerly flow at the time was much stronger, and the air-mass temperature at 1,500 m flirted with 20°C on the Atlantic seaboard! Yesterday the air mass was "only" 12 to 14°C at 1,500 m over Aquitaine, which did not stop the thermometer from approaching 30°C! This shows how easy it has become to reach thermal peaks, even without a record air mass. If the same setup as late February 1960 occurred today, we would probably reach 32-33°C at the foot of the Pyrenees!

Comparison of the air masses observed on 29 February 1960 and 24 February 2026 - meteociel.fr

For several years now, the end of the meteorological winter has often looked like spring. France has now had 8 consecutive Februaries milder than normal, and with substantial anomalies, since 5 of the last 8 Februaries recorded a thermal deviation of +2°C or more! The record mildness/warmth spikes are too many to count: 19.5°C in Belgium in February 2025; 22°C in central France in February 2024; nearly 23°C in Alsace in February 2021; no less than 27°C on the Basque coast in February 2020. Not to mention the remarkable late February 2019, with 20 to 25°C over almost the whole country and 27°C in Aquitaine!

Thermal anomaly (relative to the 1991-2020 normals) in France for the month of February from 1988 to 2026 - Météo France

With climate change, February is tending to lose its wintry character and increasingly behaves like a spring month. This triggers an early awakening of vegetation, which is then overexposed to the risk of late frost in March and April. Even so, cold Februaries remain possible in France, as last happened in 2018.

See also: >>> Is a New York-style blizzard possible in France? >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! Author: Alexandre Slowik
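The anomaly figures quoted throughout this roundup (for instance a February with a deviation of +2°C or more) are simple departures of an observed monthly mean from a climatological baseline, here the 1991-2020 normal. A minimal sketch of that bookkeeping follows; the normal and the eight monthly means are made-up illustrative values, not Météo France data:

```python
# Temperature anomaly: departure of an observed monthly mean from a
# climatological "normal" (the 1991-2020 reference-period mean).
# All numeric values below are hypothetical, chosen only for illustration.

def anomaly(observed: float, normal: float) -> float:
    """Return the departure (in °C) of an observed mean from the normal."""
    return round(observed - normal, 1)

# Hypothetical February mean temperatures (°C) for eight consecutive years,
# compared against a hypothetical February normal of 5.6°C.
normal_feb = 5.6
february_means = [7.9, 8.1, 6.4, 7.7, 8.0, 6.9, 7.4, 8.3]

anomalies = [anomaly(t, normal_feb) for t in february_means]
mild = [a for a in anomalies if a > 0]       # Februaries milder than normal
strong = [a for a in anomalies if a >= 2.0]  # deviation of +2°C or more

print(anomalies)
print(f"{len(mild)}/8 Februaries milder than normal, "
      f"{len(strong)} with an anomaly of +2°C or more")
```

With these invented numbers, all eight years come out milder than normal and five of them exceed the +2°C threshold, mirroring the shape of the statistic cited in the article without reproducing its actual data.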

Source: Météo-Paris Le froid et la neige pourront-ils revenir dans les prochaines semaines sur la France ? - Image d'illustration Cette année, le printemps semble avoir pris de l’avance sur la France. La fin février est particulièrement douce, voire chaude, avec des températures exceptionnellement élevées sur de nombreuses régions cette semaine. Ces températures anormalement élevées annoncent-elles la fin de l’hiver et l’absence de retour du froid en France ? Probablement pas... Quel temps pour le mois de mars ? Pour le moment, la majorité des modèles saisonniers s'accordent sur le fait que le mois de mars devrait se montrer plus doux que la normale sur la France mais également plus sec. Les anomalies de températures restent en effet positives sur la France tout comme sur une large partie de l'Europe alors que les anomalies de précipitations restent négatives sur la majorité du pays, excepté près de la Méditerranée où le temps pourrait se montrer plus régulièrement humide. Anomalies de températures et de précipitations sur la France pour le mois de mars 2026 – via TropicalTidBits Dans ce contexte, nous devrions donc retrouver un mois de mars régulièrement anticyclonique sur la majorité de la France avec un temps doux ou très doux en moyenne sur le mois. Aucun signal de retour du froid plus ou moins durable n'est pour le moment envisagé pour ce premier mois du printemps 2026. Mais cela veut-il dire que l'hiver est bel et bien terminé ? Des coups de froid restent-ils possibles ? Une douceur précoce et marquée dès la fin du mois de février ne rime pas forcément avec la fin de l'hiver sur la France. Par le passé, certains pics de douceur durant cette période ont été suivis de retour plus ou moins marqué du froid et même de la neige sur notre pays durant le mois de mars, parfois même plus tardivement. 2021 : Le printemps en février, l'hiver en mars ! 
During the second half of February 2021, for example, spring already seemed to be settling in even though winter was not yet over. From 16 to 25 February, exceptional mildness affected France, with temperatures staying well above the seasonal normals. Maximum temperatures recorded in France on 24 February 2021 – via Infoclimat On 24 February 2021, 15-20°C was exceeded across the whole country, often more than 21-22°C over the southern half and locally above 24-25°C between the south-west and the Massif Central. Many monthly heat records were broken during this period, and many believed winter was well and truly over. Yet three weeks later, between 15 and 23 March 2021, snow and cold made their return to our country. Under a flow that had swung to the north/north-west, then to the north-east aloft, air quite cold for the season poured over France, first bringing heavy snowfall in the mountains before snow reached the lowlands shortly before the spring equinox. Some 4-5 cm of snow was recorded in Clermont-Ferrand on 19 March, where more than 22°C had been measured a month earlier. 4 to 5 cm of snow on the morning of Friday 19 March 2021 in Clermont-Ferrand – Photograph: Daniel Paquet via Twitter: @Danieldeclerm Snowfall was also observed at low, even very low, altitudes as far as south-east France, as well as in the Pyrenees, before dry weather returned under persistent cold. Frosts were observed over three quarters of France as calendar spring began. 1960: summer in February before the return of snow and frost at the end of spring! The end of February had also been abnormally warm in France.
Between 27 and 29 February, warm air from the Sahara invaded the whole country, bringing temperatures worthy of late spring or even early summer. During this period, temperatures climbed to 29°C in Biarritz, 28°C in Pau, 26°C in Clermont-Ferrand, 24°C in Nevers, 22°C in Reims and 21°C in Paris. Under a foehn effect, as much as 31°C was even recorded at Saint-Girons in Ariège, a record for a February in France. Press clipping reporting the mildness/warmth of late February 1960 – Archives Météo-Villes Even so, winter had not said its last word over our country. Cold made a brutal, much-noticed return at the end of April. From 26 April to 5 May, air particularly cold for the season managed to surge into France, bringing snowfall down to the lowlands in some regions. On 29 April, 5 cm of snow fell in Belfort and 4 cm in Luxeuil-les-Bains, in the Vosges. The next day, frosts were widespread across the country, with -4°C in Limoges and -3°C in Nevers. This cold weather persisted into early May, causing major damage to crops. Other examples of early mildness followed by late cold snaps can be cited, such as 1998, when exceptional mildness affected France at the end of February before a temporary return of cold and snow for Easter. Thus, a spell of exceptional mildness does not necessarily mean the end of winter: temporary cold snaps remain possible into April, or even early May.
>>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference for all the media! Author: Tristan Bergen

Source: Météo-Paris The cold wave begins with 2 m of snow in the Midi! On 30 January 1986, an exceptionally violent snowstorm struck Languedoc-Roussillon, Ariège and the southern Massif Central. In barely a day and a half, accumulations shattered every record: nearly two metres of snow at Loubaresse, in Ardèche, 1.70 m at Réal, in Pyrénées-Orientales, and up to 50 cm at Carcassonne. Roads disappeared and villages were cut off from the world. The army was called in as reinforcement. The situation quickly became critical. One million people were plunged into darkness. The ORSEC emergency plan was triggered in Ardèche. In some villages of the Massif Central, electricity only returned after three weeks, sparking a heated controversy over the handling of the crisis. Ardèche in the extraordinary snowstorm of 30 January 1986 - archives meteo-paris.com Several days of blizzard between Brittany and the Beauce This storm heralded an extraordinary February. After the already memorable winter of 1984-1985, the cold settled in for good. Over the northern half of the country, February 1986 became the coldest month since 1956. The cold wave began on 5 February and did not relent until the 28th, an exceptional duration with dramatic consequences. According to climatologist Daniel Rousseau, nearly 13,000 excess deaths were recorded compared with a "normal" winter. The regions on the edge of the glacial air paid a heavy price. Brittany, Pays de la Loire, Centre, Burgundy and Rhône-Alpes were regularly swept by snow and genuine blizzards. In the Loiret, the ORSEC plan was triggered once again: road arteries were paralysed, as during the winter of 1979. On the Atlantic coast, the landscapes turned surreal: 30 cm of snow at Pornic, 16 at Lorient. At La Baule and Quiberon, skiers roamed the beaches.
Mid-February 1986: farmers come to the rescue of motorists trapped by snow in the Beauce - photo meteo-paris.com Snow even on the Corsican coast! Further south, the cold was briefer, between 8 and 14 February, but intense enough to cover all of Corsica in snow, including Ajaccio, as well as the Côte d'Azur. In Nice, the carnival was cancelled. 28 February marked the end of this glacial episode, but it ended brutally. Heavy snowfall hit the entire northern half of the country, depositing 20 cm over the Paris region. In Brittany, freezing rain turned roads and pavements into deadly traps. In Lorient, the hospital took in 75 injured in just eight hours. Three quarters of the schools closed their doors. The winter of 1986 left behind a battered country, frozen stiff and durably marked in people's memories. Snow in Ajaccio in early February 1986 - photo meteo-paris.com Also read: >>> February 1956: the worst cold wave of the 20th century in France! >>> February 1986: at the heart of three exceptional winters >>> Our weather chronicle >>> Our weather almanac >>> Our weather bulletins, updated every day >>> Seasonal outlooks Author: Guillaume Séchet

Source: Météo-Paris Diagram of the Saharan dust plumes that could reach France, notably at the start of March 2026 - illustration from the book "Y a plus de saison", Guillaume Séchet, 2008 Saharan dust is back after a few months of absence, and the quantities could become more massive in early March. First dust plumes already under way With the strong oceanic flow of recent weeks, it was impossible for Saharan dust to drift up towards our regions. But the situation has changed considerably. The wind has turned southerly over all of western Europe, and plumes of Saharan dust have begun to affect us. Until this weekend, however, these plumes of Saharan dust will remain fairly limited and barely visible in the sky. They will in fact be pushed away eastwards by a weak oceanic flow from Friday onwards. Simulation of Saharan dust plumes at 3,000 m altitude through Wednesday evening - Meteociel Much more massive plumes in early March For this phenomenon to become much more massive, a depression must form over the Iberian Peninsula and dive towards the Moroccan and Algerian desert, sweeping up large quantities of Saharan dust that then cross the Mediterranean and reach our regions. And that will most likely be the case in early March, when a cut-off low settles over Spain and drives significant dust plumes towards France. Although this is still fairly far out, the risk is quite high and successive model runs keep pointing the same way.
Forecast of Saharan dust plumes between 1 and 4 March - Greek weather model (University of Athens) A fairly common phenomenon in late winter and spring Early spring is indeed a favourable period for this type of event, because cut-off lows often become isolated over the Iberian Peninsula, and warm surges from North Africa are fairly frequent. This was for example the case on: >>> 20 March 2025, >>> 3 March 2025, >>> 17 February 2025, >>> 6 April 2024, >>> 30 March 2024, >>> 20 February 2023, >>> 26 March 2022 >>> 16 March 2022 Large quantities of Saharan dust in the sky of Aguilas (south-west of Spain) on 14 March 2022 - photo Jose Gome Ros Author: Guillaume Séchet

Source: Météo-Paris Snow in Paris - Place de l'Opéra - late February 1948 - archives meteo-paris.com A particularly powerful polar outbreak In late January and early February 1948, a powerful anticyclonic cell settled over Scandinavia and northern Europe. This configuration blocked Atlantic disturbances and favoured a persistent flow of very cold continental air from eastern Europe and Russia. These dry, glacial air masses spread towards the west and south-west of the continent. The glacial air descending from the Baltic Sea towards France on 20 February 1948 - Source: Wetterzentrale An intense cold wave at the end of February From 20 to 27 February 1948, cold and snow invaded the whole of France. Brittany was particularly affected by this winter offensive: the temperature dropped to -13°C in Brest, where the city was covered in a thick white blanket. On 22 and 23 February 1948, a snowstorm of rare violence paralysed the northern half of the country. The temperature fell to -20°C in Clermont-Ferrand and -10 to -12°C in Île-de-France. Snow even reached the Côte d'Azur. Maximum temperatures frequently remained below freezing for several consecutive days, and the cold was accentuated by sometimes sustained winds, heightening the sensation of frost. Temperature trends in Paris during February 1948 - source: meteo-climat website The capital and other major French cities paralysed The sometimes abundant snow persisted on the ground because of the durably sub-zero temperatures. In some areas, watercourses partially froze and the ground remained frozen to an unusual depth. Snowploughs appeared in the streets of Paris, where traffic became impassable. The paralysis of the capital was thus a major concern.
The use of snowploughs became necessary in Paris at the end of February 1948 - archives meteo-paris.com Coal shortages and weakened infrastructure The cold wave of February 1948 struck in a post-war context marked by weakened infrastructure and shortages, notably of coal and fuel. Rail and road transport were severely disrupted by snow and ice. Energy supply became difficult in several regions, leading to heating cuts. On the human level, the intense cold caused excess mortality, particularly among the most vulnerable. Agriculture also suffered notable damage, with crops and fruit trees affected by the prolonged frost. Difficult traffic on the Place de la Concorde at the end of February 1948 - archives meteo-paris.com In the meteorological archives, the cold wave of February 1948 is often associated with the "great winter of 1947-1948". It remains a reference for the study of extreme cold episodes in western Europe, by its duration as much as by its intensity and socio-economic impacts. Also read: >>> The last winter of the war was terribly cold in France... >>> The ordeal of the terrible winter of 1917 >>> The deadly blizzard of late February 1958 >>> Up to 60 cm of snow on the Côte d'Azur at the end of February! >>> -40 wind chill in Marseille and Dunkirk blocked by sea ice: it's possible! >>> Our many articles (1 to 3 per day) >>> Our weather almanac of the main climate events in France since 1850 >>> Our chronicle of climate events since 1709 >>> Our widely followed Twitter account, a reference for all the media! Author: Guillaume Séchet

Source: Météo-Paris The harbour of Geneva frozen over during the cold wave of February 1929 - Chronique Météo Villes The cold wave of February 1929 ranks among the most intense to strike France during the 20th century. Temperatures dropped as low as -30°C in the lowlands of Auvergne! A look back at this landmark episode. -30°C in Auvergne: an intense cold wave! The February 1929 cold wave was remarkable, following already marked cold from late January. During the second ten days of February, a powerful Scandinavian anticyclone faced off against a depression over Italy, setting up a genuine "Moscow-Paris" flow. This advected glacial air towards France, and the air mass at 850 hPa (around 1,500 m altitude) reached -20°C in the east of the country, a rare level! The peak of the cold came between 11 and 15 February 1929, with temperatures remaining remarkably low by night and by day. Air-mass temperature around 1,500 m on 13 February 1929 - reanalysis via meteociel.fr In Strasbourg, the mean of the minimum temperatures over the whole month was -13°C, and that of the maxima -3°C! This corresponds to a monthly deficit of -11.5°C relative to modern climatological normals! No fewer than 5 nights between -20 and -22°C were recorded, and maximum temperatures stalled between -12 and -15°C from 11 to 14 February 1929! It was in Auvergne that the cold was sharpest. The thermometer plunged to -30°C in the Limagne plain, in the Clermont-Ferrand area! Minimum and maximum temperatures measured in Strasbourg (67) in February 1929 - infoclimat.fr Rivers locked in ice With such intense cold, which had begun in the last week of January 1929, many French rivers froze over entirely.
The Somme was completely frozen at Amiens, as were the Meuse, the Aisne at Rethel, the Yonne and the Seine upstream of Montereau, many stretches of the Loire and a good part of the Rhône. The Mediterranean regions were not spared by this cold wave, and some coastal towns saw snowfall. The Rhône was in fact partially frozen as far down as Arles, in Bouches-du-Rhône! The Rhône partially frozen at Arles (13) in February 1929 - Chronique Météo Villes This cold wave made daily life particularly difficult. In the countryside, running water was generally not yet installed at that time, so water had to be fetched from the fountains, but most of them no longer worked! People then melted blocks of ice or snow to get water. And snow was abundant in some regions: between 10 and 20 cm was recorded from Brittany to the Lyon area (10 cm at Angers, 17 cm at Clermont-Ferrand). The frozen Saône could be crossed on foot at Chalon-sur-Saône (71) in February 1929 - Chronique Météo Villes Also read: >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference for all the media! Author: Alexandre Slowik

Source: Météo-Paris The cold wave of February 1917 in Paris: even the horses could not withstand it! archives meteo-paris.com The last two winters of the First World War were particularly harsh in France, a country already partly devastated by three years of fighting. Between 20 January and 15 February 1917, an exceptional cold wave struck above all the north and east of the country, reaching its peak in early February with extremely glacial temperatures. In "Le Monde Illustré" of 3 February 1917, it was noted that this winter revived tradition, since, according to the paper, "the great winters of old are becoming rarer and rarer...". It is true that the curve of winter temperatures in France shows a warming from the start of the century up to the outbreak of the First World War. Weather conditions unbearable for the troops' morale The frozen ground of the Aisne, paradoxically, allowed troop movements that would have been impossible on the usual muddy soil. But the French army suffered terribly from the cold, being markedly under-equipped to withstand it, unlike the German army. The regiments had only a few animal hides, and some Algerian riflemen were even shod in open shoes and dressed in short breeches. These harsh conditions took a heavy toll on the troops' morale. Relief under the snow during the war - early 1917 - archives meteo-paris.com Down to -26°C in the plains and valleys of eastern France! The cold peaked at the very beginning of February with glacial temperatures: -26°C at Bonneville, -23°C at Commercy, -22°C at Montbrison, -20°C at Grenoble, -18°C at Lyon, -17°C at Alençon and Clermont-Ferrand, and -15.5°C in Paris. The first ten days of February were compared to the situation of February 1895. In Paris, clearing snow from the roads proved very complicated owing to the shortage of labour.
Women were then requisitioned. February 1917 - archives meteo-paris.com The rivers freeze one by one The rivers of the east began to freeze on 24 January, while those of the north, including those of the Paris region, froze in the last days of January, something not seen since 1895. Navigation became impossible on the canals, then on the Seine. At the same time, strong demand for coal caused major supply difficulties in Paris, as in London. Despite the use of a few icebreakers and the construction of barriers to hold back the ice near Rouen, barges remained stuck between Rouen and Paris. A special motor-transport service was then set up. Rouen - 7 February 1917 - archives meteo-paris.com The price of coal soars!! Queues to buy coal grew longer and prices soared. Even the bourgeois ladies of the smart districts had to wait for hours, which did not fail to provoke some gnashing of teeth, figurative and literal alike. The coal shortage, at a time when so many machines depended on it, had a growing impact on economic activity. Tram lines were interrupted, factories closed their doors, and the laundries, heated with coke, gradually ceased operating. Some newspapers were even indignant that German prisoners were better heated than the French. The scarcity of coal drove up the price of firewood in the big cities. It was sold by the kilo, after being sawn and weighed on hand scales. Rabbit-fur garments, meanwhile, became very cheap. Unloading by hand after the freeze-up - cold wave of 1917 - archives meteo-paris.com Also read: >>> Our weather chronicle of the year 1917 >>> The last winter of the war was terribly cold in France...
>>> Polar cold for late January and up to 1 m of snow at Carcassonne! >>> All our articles (on average, two per day) >>> Our Twitter account (a reference for the media) Author: Guillaume Séchet

Source: Météo-Paris Some beaches of the Atlantic coast packed as if in midsummer at the end of February 1990 - Archives Météo-Villes A radical change of weather After three particularly unsettled weeks, with a succession of disturbances bringing heavy snowfall in the mountains, abundant rain over many regions and even a powerful storm over north-west France, the situation changed radically for the last ten days of February 1990. An anticyclonic surge built over the country from 20 February, bringing back calm, dry weather but also a clear surge of mildness as the flow aloft swung to south/south-west. Atmospheric situation over Europe on 22 February 1990 – Wetterzentrale While the preceding weeks had already been fairly mild under the disturbed oceanic flow, temperatures took on a whole new dimension at the start of the last ten days of February 1990. Warmth in February! From 20 February onwards, temperatures soared across the entire country. A genuine winter heatwave set in over France under this strong south/south-westerly flow aloft. On 20 February, 20°C was already reached as far north as Hauts-de-France, with 18.5°C in Paris and 19°C in Strasbourg, but it was above all the days of 23 and 24 February that proved exceptionally mild across the whole country, and even warm in some regions. On 23 February, the warmth threshold was repeatedly exceeded in south-west France, with for example 25.7°C at Biarritz, 25.9°C at Mont-de-Marsan and even 27.2°C at Dax! On 24 February, 26-27°C was exceeded in southern Aquitaine, with up to 28°C at Peyrehorade (40) and even 28.1°C at Agnos (64). Up to 25°C was also recorded in Bordeaux, 23.5°C in Clermont-Ferrand, 22.6°C in Bourges, 22°C in Mulhouse, 21°C in Orléans and 20°C in Paris.
Some stations in central France also reached the warmth threshold. Very many records were set. Maximum temperatures recorded in France on 24 February 1990 – archive Météo-Villes This spell of mildness/warmth, exceptional for the season, ended over most of the country the very next day with the return of less mild oceanic air. Only the regions from the Massif Central to the north-east kept very mild temperatures, with 22.3°C at Saint-Étienne, 21.8°C at Colmar and 19.9°C at Vichy. Two storms then hit France between 26 and 28 February. February 1990 proved exceptionally mild overall. The national mean temperature for the month exceeded the 1981-2010 normal by +4°C, which has never been equalled to this day. Only February 2024 came close to this exceptional monthly mean, with an anomaly of +3.6°C at national scale. February temperature anomalies between 1967 and 2016 in France – Météo-France Also read: >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> An air of spring early next week! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference for all the media! Author: Tristan Bergen

Source: Météo-Paris 80 cm of snow and a paralysed road in the Saint-Maximin area (Var) on 28 February 2001 - Chronique Météo Villes The end of February 2001 was marked by exceptional snowfall in south-east France. Up to 80 cm fell in the Var, causing genuine paralysis on the roads! A look back at this landmark episode. Snowstorm in the south-east In late February 2001, France faced its first real winter offensive since November 1999! A depression circulated over the Paris basin and advected cold air over the country. At the same time, a secondary low deepened in the Gulf of Genoa, triggering an easterly return flow responsible for persistent heavy precipitation over northern Italy and south-east France. Isothermal conditions set in and snow began to fall on the lowlands of Provence, particularly during the night of 27 to 28 February 2001. Weather situation in Europe on Wednesday 28 February 2001 - reanalysis via meteociel.fr Snow then fell across the whole south-east of France, but the amounts were most remarkable over Provence and in Ardèche. On Wednesday 28 February 2001, up to 80 cm of snow was measured on the ground at Saint-Maximin in the Var, 65 cm at Sault in the Vaucluse, and 52 cm at Régusse (Var)! Such accumulations are remarkable for these regions, and daily life was severely affected. Remarkable snow cover at Saint-Maximin (83) on 28 February 2001 - Chronique Météo Villes Genuine chaos on the roads! With such quantities of snow in a region so unaccustomed to the phenomenon, getting around became almost mission impossible in some areas! Secondary roads were rendered impassable, buried under several tens of centimetres of heavy, sticky snow! This snow also caused damage and numerous power cuts.
More than 100,000 households were left without power on 28 February 2001! A road buried under a thick blanket of snow at Signes in the Var on 28 February 2001 - Chronique Météo Villes As the snowstorm struck in mid-week, between Tuesday 27 and Wednesday 28 February 2001, many workers and lorry drivers found themselves stuck on the road. The major routes were not spared. The A8 motorway was notably paralysed in the Var and several thousand people became stranded motorists, which quickly led to controversy over the lack of preparation for such an episode. Thousands stranded on the roads of Provence on 28 February 2001 - Chronique Météo Villes Also read: >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> An air of spring early next week! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference for all the media! Author: Alexandre Slowik

Source: Météo-Paris Thermal anomalies forecast between 23 February and 2 March 2026 - ECMWF model While temperatures have already risen slightly in recent days, that is nothing compared with what awaits us this week. The flow will indeed turn southerly, allowing warm air from Morocco to spread directly over France. Belgium, Germany and Switzerland will also be affected by this surge of mildness across western Europe. Up to 12°C above normal! Throughout the week, temperatures will be well above the seasonal normals for late February, notably between Tuesday and Wednesday, when sunshine will flood three quarters of the country. Departures from normal could reach seven to ten degrees on Tuesday, and even ten to twelve degrees on Wednesday! Maximum temperature anomalies forecast for Tuesday and Wednesday More than 15°C almost everywhere for four days Between Monday and Thursday, afternoon temperatures will exceed fifteen degrees Celsius over almost all of France. The mildest, even locally warm, day will probably be Wednesday, as the pronounced mildness that will already have affected the south-western quarter the day before spreads up to the northern regions. Maximum temperatures forecast between Monday 23 and Thursday 26 February according to METEO-VILLES.COM Perhaps a few monthly records broken? It will be hard to reach the monthly mildness records for February, which stand at nearly thirty degrees in the far south-west and at twenty to twenty-three degrees over most regions. In Paris, however, the late-February records could be approached. The record for the third ten days of February is 21.4 degrees, in 1960. For a 24 February, the record is 20.3 degrees, in 1990, and for a 25 February, it has been as warm as 17.9 degrees at Paris Montsouris, in 2019.
Maximum temperature records for February - Meteociel For a late February, the reference years are thus 1960, 1990 and 2019. For the southern half, 2020 and 2012 can also be cited. The banks of the Seine at the end of February 2019, with temperatures around 18°C in the shade in Paris - archives meteo-paris.com >>> Sunshine and mildness favour blossoming and pollens >>> Exceptional mildness breaking weather records in late February >>> Mildness: a feel of spring in the air >>> Exceptional mildness between Tuesday and Wednesday Author: Guillaume Séchet

Source: Météo-Paris The waters are set to recede in France with the return of much drier conditions - the recession nevertheless promises to be slow in some areas. Striking images of the flooding Charente at Saintes this Wednesday 18 February 2026, whose level will keep rising tonight and tomorrow. (photos via EPTB Charente) The recession begins at the end of this week The weather has been exceptionally disturbed and wet in France for several weeks now. As a result, floods have been numerous and at times very serious across the country, notably over the west and south-west of France. Several river sections indeed remain under red alert from Vigicrues this Friday 20 February: - the lower Angevin valleys - the lower Loire - the Loire around Saumur - the lower Charente In these areas, river levels are still rising at the end of this week, ahead of a flood peak expected this weekend. At Saintes, for example, the Charente should peak on Sunday 22 February at around 6.60 m (the record being 6.84 m in December 1982). Water level of the Charente at Saintes from 8 to 22 February 2026 – Vigicrues This peak should be followed by a slow recession in this area, as on most rivers of western France. Indeed, an anticyclonic surge is building over France from this Friday, which will bring back calmer, drier weather at least until the middle or even the end of next week. Precipitation totals expected through Friday 27 February 2026 over France – GFS model via meteociel This return to dry weather should thus allow river levels to begin a more or less marked decline from this weekend and over several days, good news for the stricken regions. A recession that promises to be slow in some areas However, a return to normal should not be expected as early as next week.
The recession indeed promises to be slow to very slow on most French rivers. All the water in the catchment basins must drain away before the rivers return to more normal levels, and that promises to take a long time given that the soils are completely saturated across nearly the entire country. It is therefore logical to observe a lag between the end of the rain and the start of the recession, the time needed for the rain that fell upstream to propagate downstream. Explanatory diagram of a catchment basin – METEO-EXTREME Added to this is the more or less marked melting of the snowpack expected in the coming days on the mountains. This calmer, drier spell should indeed come with a surge of spring-like mildness over the country, both at low levels and in the mountains. This mildness should trigger the start of melting of the sometimes exceptional snowpack present on our mountains. On 20 February, more than 250 to 350 cm of snow was commonly recorded in the high Alps, locally 4 m in Isère. The Pyrenees are also heavily snow-covered, with generally 250 to 280 cm on the region's summits. Meltwater should therefore feed the rivers once again and keep levels fairly high despite the end of the precipitation. The snow layer exceeds 250 cm in the high Pyrenees at the end of this week, as here at the Col du Portalet - via Twitter @CyNPirineos Finally, it is worth noting that some scenarios already foresee the return of the disturbed oceanic influence for the end of next week, with widespread rain and successive disturbances once again, which could cause river levels to rise anew. This trend remains uncertain, however, and will need to be confirmed in the coming days.
Also read: >>> The Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Spring-like air early next week! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Tristan Bergen

Source: Météo-Paris The mildness, combined with this February's high humidity, is driving a strong and very early restart of vegetation. As a result, pollen is spreading and allergy sufferers are already feeling its effects. A spike of warmth after the rain: vegetation explodes. Recent and upcoming weather conditions combine all the ingredients for an explosion of plant growth. Considerable warmth is settling over France and will intensify in the coming days, peaking on Tuesday 24 and Wednesday 25 February 2026. Temperatures of 20°C could be reached even in the northern regions, and the first 25°C of the season is forecast in southern Aquitaine, all under bright sunshine! With soils still very wet after the abundant rain of recent weeks and temperatures worthy of April, vegetation is growing very quickly and pollen is spreading. Maximum temperatures forecast for Tuesday 24 and Wednesday 25 February 2026 - Météo Villes. Consequently, pollen is making a strong comeback and allergy risk will be high across France during this spring-like week. On Tuesday 24 February 2026, the allergy risk is rated "high" by Atmo-France over most regions, somewhat lower in parts of the south-west. The same situation will repeat on Wednesday and Thursday, before lower risks on Friday as a rain front moves through. Map of the allergy risk for Tuesday 24 February 2026 - Atmo-France. Cypress and alder: the main threats. If you are allergic, it is strongly recommended to resume your treatment or make an appointment with your doctor and/or allergist.
People allergic to cypress are particularly affected, as this pollen poses a significant threat this week, with very high concentrations in the Mediterranean regions, high concentrations in the south-west and moderate levels elsewhere. Cypress pollen is highly allergenic and often causes rhinoconjunctivitis. Cypress pollen poses a very strong allergy threat - stock photo. The other pollen causing problems across the country is alder. It is released by what are known as catkins (photo below), similar to those of the hazel. This pollen is less conspicuous but no less formidable, causing rhinoconjunctivitis and asthma attacks in allergic individuals. Its concentration is currently high, creating a significant risk of allergic reaction in most French regions. Alder pollen is released by so-called "catkins" - stock photo. With the return of sunshine and particularly mild temperatures after long weeks of bad weather, many people will spend long hours outdoors. Particular vigilance against pollen is therefore advised. Also read: >>> The Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Spring-like air early next week! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik

Source: Météo-Paris In Ville d'Avray (west of Paris), the snow depth reached 55 cm! Archive photo, meteo-paris.com. Snowstorm over northern France. In late February 1946, an anticyclone built over the Atlantic and Greenland, triggering an outbreak of polar air that invaded all of northern Europe and plunged into the North Sea. South of this cold air, a depression deepened near Portugal and then migrated over France, where it lingered for several days in early March 1946. Ahead of the depression, mild air covered the south of the country while it dragged the cold air back over the northern regions. A major clash of air masses took place, with at its centre an active, persistent snow event stretching from the Pays de la Loire to Belgium and hitting the Île-de-France particularly hard! Weather situation over Europe on 1 March 1946 - Météo Villes. 40 cm of snow was measured on the ground in Paris, a depth never recorded since weather measurements began at Paris-Montsouris! 80 years on, it has still not been equalled. The press of the day suggested one had to go back to the winter of 1829-1830 to find a comparable snow cover in the Paris region. The cover was even greater west of Paris, reaching up to 55 cm in the Yvelines! Period photos show Parisian café terraces buried under the snow! Paris under 40 cm of snow, 2 March 1946! Archive photo, meteo-paris.com. Paris paralysed by 40 cm of snow! In early March 1946, Paris turned into a veritable winter sports resort! Many residents strapped on skis to get around, schussing down the steps of the Trocadéro to the foot of the Eiffel Tower or down the slopes of the Butte Montmartre! Six months after the end of the Second World War, Parisians made the most of this exceptional event.
Yet this abundant snow also caused many problems, paralysing traffic. Skier descending the Butte Montmartre in Paris, early March 1946 - Chronique Météo Villes. The snow was so deep that roofs and glass canopies collapsed in several districts of the capital, and food supplies struggled to reach the region as shop shelves emptied. Rail traffic was also paralysed. City streets were barely passable and many falls occurred. It took several days for activity to gradually return to normal. Heavy snow at the Brochant metro station in Paris, early March 1946 - Chronique Météo Villes. It remains, to this day, the most significant snow event in Paris since weather records began. Also read: >>> Nearly 140 heat records broken this Wednesday in France! >>> What if March turns out very dry? >>> Is a New York-style blizzard possible in France? >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik


International News / Journaux internationaux

Source: New York Times World Judges at the International Criminal Court have heard starkly different interpretations this week of the words of former President Rodrigo Duterte of the Philippines.

Source: The Guardian World IAG reports record operating profits on margins of more than 15% at British Airways and sister airline Iberia British Airways’ owner, International Airlines Group, has announced a sharp rise in annual profits to almost £4bn despite a slight fall in passenger numbers in 2025. Pre-tax profits across IAG increased by 20% to €4.5bn (£3.9bn), with record operating profits on margins of more than 15% at BA and its sister airline Iberia. Continue reading...

Source: BBC World BBC international correspondent Quentin Sommerville travelled to Culiacán in northern Sinaloa state following an explosion in violence.


Raspberry Pi

Source: Framboise 314 Behind Macé Robotics, Nicolas combines component-level electronics repair and board design for professional clients while developing mobile robots for education and research. His projects include robots based on the Raspberry Pi and Raspberry Pi Pico (MRPi1, MR-Pico), backed by tutorials and documentation. In this context, he is organising [...] The article "Concours Mace Robotics : un Raspberry Pi 5 (et un Pico 2W) à gagner !" first appeared on Framboise 314, le Raspberry Pi à la sauce française - the French-language Raspberry Pi reference, by the author of the book "Raspberry Pi 4" published by ENI.

Source: Framboise 314 On 14 and 15 February 2026, join me in Vitré for the Tech Inn'Vitré show (digital usages), organised by Vitré Communauté and Makeme. Two days to discover concrete uses of digital technology, to test, to tinker... and above all to meet "in real life". Tech Inn'Vitré 2026: see you on 14 & 15 February at the Centre culturel de [...] The article "Retrouvez nous sur Tech Inn'Vitré les 14 et 15 février 2026" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 The SunFounder Fusion HAT+ looks like a simple Raspberry Pi HAT... until you realise it is more of a Swiss Army knife for "AI-assisted" robots. It does not run the AI itself: the neurons stay on the Raspberry Pi (a Pi 5 in my case), but the board provides the muscle [...] The article "SunFounder Fusion HAT+ : alimentation 2×18650, moteurs et contrôle “IA-ready” pour Raspberry Pi" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 In this second part, the Raspberry Pi 5 gets to work with real-time video accelerated by the Hailo-10H. Person detection, dynamic framing, skeleton pose and hand recognition: one concrete model after another. The goal is to assess real-world performance, the limits, and the right trade-offs in practice. No cloud here [...] The article "Raspberry Pi AI HAT+ 2 : vision par ordinateur en vidéo avec Hailo-10H (Partie 2)" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 With the Raspberry Pi AI HAT+ 2, the Raspberry Pi Foundation introduces a HAT+ board integrating the Hailo-10H accelerator and 8 GB of dedicated memory, designed exclusively for the Raspberry Pi 5. Connected over PCIe Gen 3, it targets local execution of AI models without relying on the cloud. In this first article, I present [...] The article "Raspberry Pi AI HAT+ 2 : présentation matérielle et installation sur Raspberry Pi 5" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 Today I invite you to discover Pimmich, an open-source connected photo frame based on the Raspberry Pi, designed to display your memories with no cloud and no subscription, staying 100% local. With the recent changes on the Google Photos side, many of you have had to rethink your habits... and Aurélien had the right reflex: relying on [...] The article "Pimmich – Un cadre photo connecté open source basé sur Raspberry Pi" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 The Google Earth application is no longer really maintained on Linux, and it no longer exists at all as a native build for ARM architectures such as the Raspberry Pi's. The last official Linux version dates from 2020, and installing it on an ARM-based Pi is now doomed to fail. In practice, to use Google Earth on [...] The article "Utiliser Google Earth sur Raspberry Pi : la solution Web qui fonctionne" first appeared on Framboise 314, le Raspberry Pi à la sauce française.

Source: Framboise 314 The question of driving WS2812B LEDs on the Raspberry Pi 5 was recently raised by Victor during an exchange on a social network. The Raspberry Pi 5 introduces a new hardware architecture that complicates driving WS2812B LEDs with the historical libraries. Solutions based on PWM or DMA quickly show [...] The article "Raspberry Pi 5 : piloter des LEDs WS2812B de manière fiable avec le bus SPI" first appeared on Framboise 314, le Raspberry Pi à la sauce française.
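The SPI approach that the article's title refers to can be sketched in a few lines: with the SPI clock near 2.4 MHz, each WS2812B data bit is stretched into three SPI bits (a logical 0 becomes 100, a logical 1 becomes 110), so the bus waveform approximates the LED protocol's timing. A minimal, stdlib-only Python sketch of the encoding step (the 3-bits-per-bit scheme and clock rate are common practice, not the article's actual code; the SPI transfer itself, e.g. via spidev, is not shown):

```python
def encode_ws2812b(colors):
    """colors: iterable of (r, g, b) tuples -> bytes ready for an SPI transfer
    at ~2.4 MHz, where each LED bit is expanded to 3 SPI bits."""
    bits = []
    for r, g, b in colors:
        for byte in (g, r, b):                 # WS2812B expects GRB order
            for i in range(7, -1, -1):         # MSB first
                bits.extend([1, 1, 0] if (byte >> i) & 1 else [1, 0, 0])
    out = bytearray()
    for i in range(0, len(bits), 8):           # pack the bit list into bytes
        chunk = bits[i:i + 8]
        chunk += [0] * (8 - len(chunk))        # pad the final byte if needed
        out.append(int("".join(map(str, chunk)), 2))
    return bytes(out)

# One LED = 24 colour bits = 72 SPI bits = 9 bytes on the wire.
frame = encode_ws2812b([(255, 0, 0)])          # a single bright-red pixel
```

The resulting `frame` would then be written to the SPI device; a trailing run of zero bytes serves as the latch/reset gap between refreshes.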

Source: Toms Hardware Raspberry Pi VEEB Projects has put together a cool transparent Raspberry Pi display using a glass dome and a program that replicates the Pepper's Ghost effect.

Source: Toms Hardware Raspberry Pi Abe's Projects has put together a custom mini PC using two Raspberry Pi Picos, featuring a touchscreen, custom apps, and a built-in keyboard.

Source: RaspberryTips.fr You don't need sensors, screens or extra gadgets to build something great with your Raspberry Pi. In fact, many of the most useful and rewarding projects can be done with nothing more than your Pi, a microSD card and a power supply. The Raspberry Pi can be used as a server...

Source: Raspberry Pi We’ve updated our pages, forms, and CAPTCHA infrastructure on raspberrypi.com to improve accessibility for screen reader users.

Source: Toms Hardware Raspberry Pi Powered by the Raspberry Pi Zero 2 W, Jeff Merrick's slab of 1970s/1980s aesthetic channels the worn, broken "charm" of the Alien universe while belying the powerful single-board computer within.

Source: Toms Hardware Raspberry Pi Raspberry Pi's latest AI accessory brings a more powerful Hailo NPU, capable of LLMs and image inference, but the price tag is a key deciding factor.

Source: Toms Hardware Raspberry Pi The price of a Raspberry Pi now has parity with Intel N100 mini PCs at just over $200, with flash memory price spikes continuing to push prices up across the board.

Source: Toms Hardware Raspberry Pi The invitation to Mayor-elect Mamdani's inauguration lists Raspberry Pi and Flipper Zero as prohibited items but does not provide a reason.

Source: Toms Hardware Raspberry Pi This Raspberry Pi project captures Wi-Fi data and then blasts it out as sound to make it feel like you're connecting via a dial-up modem.

Source: Toms Hardware Raspberry Pi Raspberry Pi has released an updated version of the Raspberry Pi 500 and this time the omitted NVMe storage is present, as is an RGB mechanical keyboard.

Source: Toms Hardware Raspberry Pi Raspberry Pi releases a smaller model of its updated touch display. This time with $20 off the price but the same display as the larger model.

Source: Toms Hardware Raspberry Pi Argon40’s new Raspberry Pi Compute Module 5-powered laptop has it all, but the price makes it a considered purchase.

Source: Toms Hardware Raspberry Pi GamerCard is a retro gaming handheld so portable that it's literally the size of a gift card, so you can now casually spend $170 at checkout.

Source: Toms Hardware Raspberry Pi Spacerower is using a Raspberry Pi Zero to power this custom 3D-printed camera that instantly prints out photos using thermal paper.

Source: Toms Hardware Raspberry Pi Goblinhan Yıkan has created a Raspberry Pi Pico-powered fight stick that has extra buttons for throwing random combos while playing fighting games.

Source: Toms Hardware Raspberry Pi Dan McCreary shows off how to create your own FFT sound spectrum analyzer using our favorite microcontroller, the Raspberry Pi Pico 2.
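The core of any sound spectrum analyzer like the one above is a discrete Fourier transform over a buffer of ADC samples. As a rough illustration of the math only (a naive pure-Python DFT with made-up sample rates; an actual Pico 2 build would sample a microphone and use optimized FFT code in MicroPython or C):

```python
import math

def dft_magnitudes(samples):
    """Naive DFT: return the magnitude of each frequency bin 0..N//2."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# Synthesize a 1 kHz tone sampled at 8 kHz (64 samples = exactly 8 cycles)
rate, freq, n = 8000, 1000, 64
tone = [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]
mags = dft_magnitudes(tone)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * rate / n      # bin width = rate / n = 125 Hz
```

The peak lands in bin 8 (1000 Hz), which is what the analyzer's bar display visualizes; real implementations replace the O(n²) loop with an O(n log n) FFT.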

Source: Toms Hardware Raspberry Pi Raspberry Pi's new $25 PoE+ Injector brings power over Ethernet to the Raspberry Pi 3B+ and 4 via existing PoE HATs. The Raspberry Pi 5 still has to wait for the PoE+ HAT+, which has been in development since late 2023.

Source: Toms Hardware Raspberry Pi André Esser is using a Raspberry Pi to power this ASCII camera project that he recently created for Pi Jam, celebrating Pi day.

Source: Toms Hardware Raspberry Pi Maker Jackw01 and Soaporsalad have put together a cool Raspberry Pi handheld featuring a Raspberry Pi Zero 2 W that's small enough to fit in an Altoids tin.

Source: Toms Hardware Raspberry Pi John Park has created a cool Raspberry Pi-powered wall arcade that features multiple matrix panels as its main display.

Source: Toms Hardware Raspberry Pi NeverCode has created a Raspberry Pi Pico smart clock and shared lots of details on how you can recreate it for yourself at home.

Source: Toms Hardware Raspberry Pi Pollux Labs is using a Raspberry Pi to power this rotary phone project that integrates ChatGPT and remembers previous conversations.

Source: Toms Hardware Raspberry Pi Raspberry Pi has announced the general availability of the RP2350 A and B variants, the chip that powers the Raspberry Pi Pico 2 and features both Arm and RISC-V cores.

Source: Toms Hardware Raspberry Pi ClockworkPi has released a cool Raspberry Pi Pico kit that lets you create a calculator suitable for handling your math homework or playing games.

Source: Toms Hardware Raspberry Pi Glossyio has created a Raspberry Pi-powered traffic monitor that uses AI to watch traffic and gather statistics on specific travelers such as cyclists and pedestrians.

Source: Toms Hardware Raspberry Pi Maker 3megabytesofhotram is using a Raspberry Pi to power a voice-activated paper towel dispenser that makes it easier than ever to dry your hands.

Source: Toms Hardware Raspberry Pi Tribal2 is using a Raspberry Pi to drive this cool interactive LED world map that integrates with his smart home setup.

Source: Toms Hardware Raspberry Pi Blink twice to control the robot arm

Source: Toms Hardware Raspberry Pi The Civitas Universe has put together a cool Raspberry Pi cyberdeck that scans brains and features a cool cyberpunk theme in a custom 3D-printed case.

Source: Toms Hardware Raspberry Pi Install Windows 11 for Arm on the Raspberry Pi 5 using the simplest installation method that we have ever encountered.

Source: Toms Hardware Raspberry Pi ACCESS GRANTED

Source: Toms Hardware Raspberry Pi Arnov Sharma built a Raspberry Pi Pico studio light from scratch that can be controlled using push buttons to adjust the LEDs with precision.

Source: Toms Hardware Raspberry Pi Yaluke has created a Raspberry Pi Pico-powered protractor that can be used to calculate rotation data for simulating steering wheels when driving.

Source: Toms Hardware Raspberry Pi Arnov Sharma has created a temperature gun from scratch using a Raspberry Pi Pico 2 as the main board.

Source: Toms Hardware Raspberry Pi Arnov Sharma has put together a cool Raspberry Pi-powered handheld console for playing the classic game Snake on a Matrix.

Source: Toms Hardware Raspberry Pi Visible_Turnover3952 has created a Raspberry Pi-powered cat house with luxurious smart home features and automated systems to keep its feline occupants cozy.

Source: Toms Hardware Raspberry Pi Three new carrier boards for the Compute Module 5 and the older Compute Module 4 bring Raspberry Pi 5 accessories to the CM5, plus PoE before Raspberry Pi releases its own version.

Source: Toms Hardware Raspberry Pi Nicholas LaBonte is using a Raspberry Pi to power this custom cyberdeck handheld complete with custom-milled keys and wood finishing.

Source: Toms Hardware Raspberry Pi Tonight-we-ride has put together a cool Raspberry Pi music player with a touchscreen and customizable interface with Winamp.

Source: Toms Hardware Raspberry Pi Coming soon is a Kickstarter that puts the Compute Module 5 inside a custom-designed laptop.

Source: Toms Hardware Raspberry Pi Efren Lopez has created a Raspberry Pi-powered Creeper robot from the Minecraft universe complete with an AI chip and a motorized body.

Source: Toms Hardware Raspberry Pi Aforsberg has created a cool LED matrix display for their 1U server rack that's decked out like the WOPR computer from the 1983 movie War Games.

Source: Toms Hardware Raspberry Pi The Raspberry Pi RP2040 now officially supports 200 MHz operation, thanks to the latest Pico-SDK release.

Source: Toms Hardware Raspberry Pi Bicapitate has created a custom Raspberry Pi-powered 3D-printed map of Manhattan that displays the location of subway trains in real time using LEDs and optical fiber.

Source: Raspberry Pi Spy Displaying the pinout of a Raspberry Pi Pico is possible using my “picopins” script. The script displays the pinout in a colour coded format showing the location of power, ground and GPIO pins. I find it useful if I’m coding Pico projects on my laptop or Pi 400 and need to check the location of [...]

Source: Raspberry Pi Spy This guide explains how to disable auto-login on Raspberry Pi OS. By default when you install the Raspberry Pi OS with the desktop it will auto-login when you power-up the Pi. This is really convenient for lots of projects as it gets you straight to the desktop. If you are using your Pi as a [...]


IoT - Internet of Things / IdO - Internet des Objets

Source: Home Assistant Community Forum (Latest) I’m wondering how people with HomeAssistant are managing the IP addresses for all of their IoT devices? The options that I’m aware of are: Static IP addresses - you manually set a static IP address on each device. This doesn’t really scale, and is vulnerable to human error (e.g. an address conflict between two devices). DHCP - the devices grab IP addresses from a pool; however, the address may change between reboots, lease timeouts etc. DHCP with static reservations - on your DHCP server, you reserve specific IP addresses from the pool for devices via their MAC addresses. It’s a bit of overhead though. Let’s say you have a mix of IoT devices (some of this is planned in my case): ESPHome devices, Shelly devices, Tuya devices (used via localtuya or tuya-local - still figuring out which is better), WLED light controllers. In my case, the router is running VyOS, and it’s a dual-stack IPv4/IPv6 network. VyOS does allow setting static DHCP leases: https://docs.vyos.io/en/1.4/configuration/service/dhcp-server.html#static-mappings However, it’s a bit of a pain having to manage that manually. The Ansible module for VyOS doesn’t seem to support configuring the DHCP server either: https://docs.ansible.com/projects/ansible/latest/collections/vyos/vyos/index.html so you can’t even use that to streamline things, or get it into source control. However, these days, do you actually need stable addresses for devices (i.e. with mDNS/Bonjour and all that)? As in, will HomeAssistant handle it fine if device IP addresses change underneath it? Or what would you do to handle this? 9 posts - 8 participants Read full topic
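For the "DHCP with static reservations" option, one way to tame the manual overhead the poster describes is to keep a single source-of-truth inventory under version control and generate the router commands from it. A hypothetical Python sketch (the device names, addresses, and VyOS-style command keywords below are illustrative only; the real CLI syntax must be checked against the VyOS documentation the poster links):

```python
# Hypothetical inventory: name -> (MAC address, desired IP)
devices = {
    "esphome-sensor1": ("aa:bb:cc:00:00:01", "192.168.10.11"),
    "shelly-plug1":    ("aa:bb:cc:00:00:02", "192.168.10.12"),
    "wled-strip1":     ("aa:bb:cc:00:00:03", "192.168.10.13"),
}

def reservation_commands(devices, network="LAN", subnet="192.168.10.0/24"):
    """Emit one static-mapping command pair per device, failing fast on the
    human errors the post mentions (duplicate IPs or MACs)."""
    ips = [ip for _, ip in devices.values()]
    macs = [mac for mac, _ in devices.values()]
    if len(set(ips)) != len(ips) or len(set(macs)) != len(macs):
        raise ValueError("duplicate IP or MAC in inventory")
    base = f"set service dhcp-server shared-network-name {network} subnet {subnet}"
    cmds = []
    for name, (mac, ip) in sorted(devices.items()):
        cmds.append(f"{base} static-mapping {name} mac-address {mac}")
        cmds.append(f"{base} static-mapping {name} ip-address {ip}")
    return cmds
```

Since the generator lives next to the inventory file, both go into source control even though the VyOS Ansible collection lacks DHCP-server support.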

Source: Domoticz (Forum News) Hi, I am using Domoticz on a Raspberry Pi. After an update, a login screen appears upon startup. It asks for a username and password. Then a message appears stating they are incorrect, and after three attempts, a screen appears with the message: "congratulations …….the end of the internet". The strange thing is that Domoticz is otherwise working fine; the lights are on. But Domoticz is inaccessible. Software and hardware used: Domoticz running on a Raspberry Pi. What I have already found or tried: I connect to the Raspberry Pi via SSH. I tried to repair Domoticz or completely reinstall it. Nothing helped. Does anyone have a solution to my problem? Statistics: Posted by miguell — Tuesday 24 February 2026 9:46 — Replies 3 — Views 356

Source: Home Assistant (Blog officiel) After an amazing 2025 that saw 12 new Works with Home Assistant partners join the program, it’s now time to say “Hei” to the first partner joining us this year: Heiman. Founded back in 2005, Heiman specialize in smart home security devices, and are bringing an impressive selection of safety-focused sensors and alarms to the program: including the first Matter carbon monoxide alarms to be certified, along with smoke alarms designed for international markets. Keep it local, keep it safe If you’re new to the Works with Home Assistant program, it’s designed to help you identify devices that work brilliantly with Home Assistant, and support the Open Home Foundation’s principles of privacy, choice, and sustainability. These values all pivot around local control, something that’s essential when it comes to home safety. Your smoke and CO alarms need to work when you need them most, regardless of your internet connection or cloud service status (though if you want to check in on your devices while away from home, Home Assistant Cloud provides secure remote access, and your subscription helps fund this very program, among other things!). Our in-house team has thoroughly tested Heiman’s devices to ensure they meet this key requirement, and we’re happy to report they did! But Heiman has gone further still by using the Matter open connectivity standard… Why this matters Matter was launched to be a unifying connectivity type with interoperability at its heart. Instead of being locked into one company’s ecosystem, Matter devices work across Home Assistant, as well as other platforms like Google Home. Heiman’s Matter devices work over Thread, which adds another layer of benefits. Thread is a low-power wireless mesh network protocol that creates resilient connectivity throughout your home, perfect for battery-powered sensors that need reliable communication while staying energy efficient. 
This is ideal for battery-powered sensors like Heiman’s that need to be energy efficient while maintaining reliable communication. So why does all this matter for safety devices specifically? Well firstly, it’s important to know these smart devices will still work as “dumb” ones, so there’s always a failsafe if you decide to rebuild your Thread network, or start making tweaks. If your sensors integrate locally, it means you can automate basic checks, such as reminders to test an alarm once a month, or notifications of hardware faults. If you want to go even further, your smoke alarm could trigger emergency lighting, your CO detector could shut off your gas fireplace, or your leak sensor could close water valves, all without sending your private data through a third-party server. And this is just the sort of complete, interoperable ecosystem Heiman aims to provide. "Our core goal has always been to enable every family to enjoy a safe and intelligent living experience. Home Assistant, as a world-leading open source smart home platform, has an open and inclusive ecological philosophy and strong compatibility with multi-brand and multi-protocol devices, which are highly consistent with the direction of our product research and development. We deeply understand that only by integrating into an open ecosystem can we break down device barriers and provide users with a truly seamless whole-house smart solution."

  • Leo Xie, Software Engineer Manager at Heiman Working with the community Heiman is showing they’re true to these ambitions. Beyond getting certified, they’re planning to take an active role in the Home Assistant community by participating in discussions, listening to real-world feedback, and continuously optimizing their products based on what users actually need. They’re also sharing their technical expertise in smart home security, collaborating with developers to explore innovative safety scenarios that benefit everyone. Devices Heiman’s commitment to openness and community is also reflected in the devices we’ve certified, which also meet strict safety regulations across the US, Europe, Asia and beyond. Before Heiman joined, we had one Zigbee smoke alarm in the program. Now there are Matter options for multiple regions, plus the first certified carbon monoxide alarms: more choice, more coverage. What devices have been certified? Heiman Smart Smoke Alarm (USA) Heiman Smart Smoke Alarm (EU and China) Heiman Smart Carbon Monoxide Alarm (USA) Heiman Smart Carbon Monoxide Alarm (EU and China) Heiman Motion Sensor Heiman Water Leak Sensor Heiman Humidity and Temperature Sensor Also worth noting: Heiman’s global presence allows them to deliver quality devices at prices that won’t break the bank. Safety sensors and alarms shouldn’t be a luxury, and Heiman’s approach means they don’t have to be. No more guessing games! Accessible pricing is just one way Heiman expands choice for users. We’ve found they also deliver on the other core principles behind the Works with Home Assistant program: local control protects privacy, and open standards ensure sustainability. And that’s the whole point of our certification process: to make it easier for you to spot manufacturers who genuinely commit to these values, taking the guesswork out of building your open home. For full details of all Works with Home Assistant partners, check out our certified device list. 
Welcome to the program, Heiman, we’re excited to see what the community builds with these devices! Frequently asked questions If I have a device that is not listed under Works with Home Assistant, does this mean it’s not supported? No! It just means that it hasn’t gone through a testing schedule with our team, or doesn’t fit the requirements of the program. It might function perfectly well but be added to the testing schedule in the future. OK, so what’s the point of the Works with program? It highlights the devices we know work well with Home Assistant and the brands that make a long-term commitment to keeping support for these devices going. The certification agreement specifies that brands must continue to support the devices in the program. How were these devices tested? All devices in this list were tested using a standard Home Assistant Green Hub with the Home Assistant Connect ZBT-2 as the Thread Border Router and with our certified Matter integration. Will you be adding more Heiman devices to the program? Why not! We’re thrilled to foster a close relationship with the team at Heiman to work together on any upcoming releases or add in further products that are not yet listed here. We are also chatting with them about some exciting future plans.

Source: Gladys Assistant (Forum) Hello, Gladys is just what I needed right now. I found it through AyLabs' video. I have just retired; I worked in IT on virtualisation projects, and I have moved to my little house in the south (I'm setting the scene to show that digging into YAML doesn't scare me, it just annoys me a bit). I only had a Lidl box with connected plugs and lamps and a motion detector, a few automations, a camera and other Xiaomi devices, and a Samsung washing machine on SmartThings, nothing complicated. I had set up Proxmox with an HA VM. I'm not saying it's complicated, but it is tedious. And if you don't want to spend hours on it, things quickly get messy. Not to mention the WAF... I think HA is a magnificent, extremely complete tool that covers every scenario, but it is fairly inaccessible to the newcomer or to your partner at home; let's say it demands enormous time and effort to get something presentable and usable. So I'm diving into Gladys Assistant with enthusiasm. Gladys is elegant, and even though I struggled to pair my Zigbee devices with my Sonoff dongle (my fault, no doubt, since I didn't read the docs, or read them badly), the support is top-notch and the interface easy... In short, I'll now look at integrating my Ikea Matter devices and the rest. 4 posts - 3 participants Read full topic

Source: Gladys Assistant (Forum) Hello everyone, I got into home automation 15 years ago, starting with EEdomus, moving through Jeedom, and ending up, like many, on HA. I'm 63 and retired from IT, mostly systems and networks. Today I'm discovering Gladys; I haven't installed or tested it yet. I'm still on HA on a NUC that runs 24/7 and works pretty well, so you may ask, what am I doing here? The thing is, I'm always on the lookout for something new. I'm not fundamentally attached to HA, since a few things annoy me: the endless updates, programming in YAML, which is a real pain, and plenty of things like that. So I'd like to turn to something new, not because I expect a revolution, but perhaps for a little more peace of mind… The catch is that I use several different technologies (Zigbee, Z-Wave, Matter, RFX) plus various camera systems, not to mention Philips Hue, Alexa, etc., and I'm a bit afraid of taking the leap, breaking everything, and having to start from scratch. Has anyone here already made the big jump, and did it go well overall? All right, I'll stop rambling and look forward to your (kind) replies. Cheers. Pascal 4 posts - 4 participants Read full topic

Source: Domoticz (Forum News) I designed a plugin for Airplane Tracking based on the work of @janpep 'Script for Airplanes.live API'. Installation is quite simple.

  1. mkdir ~/domoticz/plugins/AirPlaneTracker
  2. Copy the code to ~/domoticz/plugins/AirPlaneTracker/plugin.py
  3. do a 'sudo systemctl restart domoticz'
  4. Go to Hardware and select the plugin 'Airplane Tracker', give it a name (adapt settings) and click on 'Add'
  5. Go to the 'Utility' tab, find the Airplanes - Counter, click Edit and change the counter Type from Energy to Custom
  6. Off you go

The names of the sensors are made up of 2 parts:

  • The name you give to the plugin at install
  • The name of the sensor

Example: I gave the plugin the name Vliegtuigen, so the name of the Tracker sensor is Vliegtuigen - Tracker. The sensors are: Tracker - displays the planes flying over your head; Counter - counts the planes flying over your head; Types - counts the types of planes flying over your head. When you click on the CallSign of a plane like KLM1844 or RYR9HN or BAW979 in the Tracker sensor, a new Airplanes.live tab opens in your browser and shows the plane on the map. This is a one-day score within 9 miles from my house.

Code:

```python
# Airplane Tracker Plugin for Domoticz
# Author: Hein
"""
Airplane Tracker

Monitor live air traffic around your location using the airplanes.live API.

Features
  Tracker: Shows the last 3 aircraft detected with details (altitude, speed, direction)
  Counter: Cumulative count of all aircraft seen (resets daily by Domoticz)
  Types:   Daily summary of aircraft types detected

Configuration
  Radius: Detection radius in miles around your Domoticz location
  API Interval: Time between API calls in seconds (default: 60).
  Log Unknown Types: Logs unclassified types to ~/unknown_aircraft_types.log.
  Log Level: Controls logging verbosity.

Important - Counter Device: After creation, go to Devices, click Edit on the
Counter, and change Type to Custom.
"""
import Domoticz
import urllib.request, json, socket, time, math, os
from datetime import datetime


class BasePlugin:
    def __init__(self):
        self.last_seen_ts = {}
        self.tracker_buffer = {}
        self.type_counts = {}
        self.logged_unknowns = set()
        self.today = datetime.now().strftime('%Y-%m-%d')
        self.total_count = 0
        self.home_dir = os.path.expanduser("~")
        self.last_api_call = 0

    def should_log(self, level):
        log_level = Parameters.get("Mode6", "Error")
        if log_level == "Info":
            return True
        elif log_level == "Status":
            return level in ["Status", "Error"]
        else:
            return level == "Error"

    def classify_aircraft(self, t, desc):
        t = t.upper() if t else ""
        desc = desc.upper() if desc else ""
        # 1. Helicopters (Priority)
        if ("HELICOPTER" in desc or "ROTORCRAFT" in desc
                or t in ["EC35", "EC45", "H135", "H145", "AS32", "EH10", "NH90", "CH47"]):
            return ("Helicopters", "helicopter")
        # 2. Special / Military / Large Transport
        if "RIVET" in desc or t == "R135":
            return ("Boeing RC-135 (Radar)", "known")
        if "AWACS" in desc or t == "E3TF":
            return ("Boeing E-3 AWACS", "known")
        if "GLOBEMASTER" in desc or t == "C17":
            return ("Boeing C-17 Globemaster", "known")
        if "ATLAS" in desc or t == "A400":
            return ("Airbus A400M Atlas", "known")
        if "HERCULES" in desc or t == "C130":
            return ("Lockheed C-130 Hercules", "known")
        if "STRATOTANKER" in desc or t == "K35R":
            return ("Boeing KC-135 Tanker", "known")
        # Specifically for A330 tankers (Voyager/MRTT) based on the description
        if "VOYAGER" in desc or "MRTT" in desc:
            return ("Airbus A330 Tanker/Transport", "known")
        if "BELUGA" in desc or t in {"A3ST", "A337"}:
            return ("Airbus Beluga", "known")
        # 3. Airbus Families
        if t.startswith("A38") or "380" in desc:
            return ("Airbus A380", "known")
        if t.startswith("A35") or "350" in desc:
            return ("Airbus A350", "known")
        if t.startswith("A34") or "340" in desc:
            return ("Airbus A340", "known")
        # General A330 check (now also catches an A332 that is not a Voyager)
        if t.startswith("A33") or "330" in desc:
            return ("Airbus A330", "known")
        if (t.startswith("A31") or t.startswith("A32") or t.startswith("A2")
                or "A32" in desc or "A-32" in desc):
            return ("Airbus A320-family", "known")
        if t.startswith("BCS") or "A220" in desc:
            return ("Airbus A220", "known")
        # 4. Boeing Families
        if t.startswith("B73") or t.startswith("B3") or "737" in desc:
            return ("Boeing 737", "known")
        if t.startswith("B74") or "747" in desc:
            return ("Boeing 747", "known")
        if t.startswith("B77") or "777" in desc:
            return ("Boeing 777", "known")
        if t.startswith("B78") or "787" in desc:
            return ("Boeing 787", "known")
        if t.startswith("B75") or "757" in desc:
            return ("Boeing 757", "known")
        if t.startswith("B76") or "767" in desc:
            return ("Boeing 767", "known")
        # 5. Commercial Regional
        if (t.startswith("E17") or t.startswith("E19") or t.startswith("E2")
                or "E-JET" in desc or "E170" in desc or "E190" in desc):
            return ("Embraer E-Jet Family", "known")
        if t.startswith("DH8") or "DASH 8" in desc or "Q400" in desc:
            return ("Dash 8 / Q400", "known")
        if t.startswith("AT") or "ATR" in desc:
            return ("ATR 42/72", "known")
        if t.startswith("RJ") or t.startswith("B46") or "AVRO" in desc or "BAE 146" in desc:
            return ("Avro RJ / BAe 146", "known")
        if "FOKKER" in desc or t in ["F70", "F100"]:
            return ("Fokker 70/100", "known")
        # 6. Business & Recreational
        if (t in ["E135", "E140", "E145", "ER3", "ER4", "ERJ"]
                or "ERJ-1" in desc or "ERJ 1" in desc):
            return ("Business & Recreational", "small")
        small_codes = {
            "C25A", "C25B", "C25C", "C525", "C550", "C560", "C680", "C510",
            "C500", "C501", "C551", "GLF4", "GLF5", "GLF6", "G280", "GALX",
            "FA50", "FA7X", "FA20", "FA2K", "FA10", "LJ35", "LJ45", "LJ60",
            "BE20", "BE30", "BE40", "BE9L", "BE10", "PC12", "PC6", "PC24",
            "CL30", "CL35", "CL60", "GL5T", "H25B", "H25C", "HA4T", "SW4",
            "P28A", "P28R", "P28T", "PA46", "C172", "C182", "C208", "SR20", "SR22"
        }
        small_keywords = ["CITATION", "GULFSTREAM", "FALCON", "LEARJET", "HAWKER",
                          "CHALLENGER", "PHENOM", "LEGACY", "BEECHCRAFT", "PILATUS",
                          "CESSNA", "PIPER", "CIRRUS", "METRO", "KINGAIR"]
        if t in small_codes or any(k in desc for k in small_keywords):
            return ("Business & Recreational", "small")
        # 7. Other Commercial / Classics
        if "MD11" in desc or t == "MD11":
            return ("McDonnell Douglas MD-11", "known")
        if "MD8" in desc or t.startswith("MD8"):
            return ("McDonnell Douglas MD-80", "known")
        return ("Other", "other")

    def log_unknown_type(self, hex_id, t, desc, reg):
        if Parameters.get("Mode3", "False") != "True":
            return
        log_key = f"{t}|{desc}"
        if log_key in self.logged_unknowns:
            return
        self.logged_unknowns.add(log_key)
        log_file = os.path.join(self.home_dir, "unknown_aircraft_types.log")
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        log_entry = f"{timestamp} | hex={hex_id} | t={t} | desc={desc} | r={reg}\n"
        try:
            with open(log_file, 'a') as f:
                f.write(log_entry)
        except:
            pass

    def get_cardinal_dir(self, angle):
        directions = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                      "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]
        return directions[int((angle + 11.25) / 22.5) % 16]

    def calculate_distance(self, lat1, lon1, lat2, lon2):
        R = 6371
        dLat, dLon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dLat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.sin(dLon / 2) ** 2)
        return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

    def onStart(self):
        if 1 not in Devices:
            Domoticz.Device(Name="Tracker", Unit=1, TypeName="Text", Used=1).Create()
        if 2 not in Devices:
            Domoticz.Device(Name="Counter", Unit=2, Type=113, Subtype=0, Used=1).Create()
        else:
            try:
                if Devices[2].sValue:
                    self.total_count = int(Devices[2].sValue.split(';')[0])
            except:
                self.total_count = 0
        if 3 not in Devices:
            Domoticz.Device(Name="Types", Unit=3, TypeName="Text", Used=1).Create()
        Domoticz.Log("Airplane Tracker started.")

    def onHeartbeat(self):
        try:
            if "Location" not in Settings:
                return
            loc = Settings["Location"].split(";")
            my_lat, my_lon = float(loc[0]), float(loc[1])
            now_ts, now_dt = time.time(), datetime.now()
            if now_dt.strftime('%Y-%m-%d') != self.today:
                self.today = now_dt.strftime('%Y-%m-%d')
                self.last_seen_ts.clear();
```
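The distance and bearing math the plugin relies on can be lifted out and run standalone. This is a minimal, illustrative sketch of the haversine great-circle distance and the 16-point compass conversion (the function bodies mirror the plugin's helpers; the sample coordinates are just an example):

```python
import math

def get_cardinal_dir(angle):
    """Map a bearing in degrees (0-360) to a 16-point compass direction."""
    directions = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                  "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]
    return directions[int((angle + 11.25) / 22.5) % 16]

def calculate_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    R = 6371  # mean Earth radius in km
    d_lat, d_lon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(d_lat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(d_lon / 2) ** 2)
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

print(get_cardinal_dir(0))    # N
print(get_cardinal_dir(225))  # SW
# Amsterdam -> Rotterdam, roughly 58 km
print(round(calculate_distance(52.37, 4.90, 51.92, 4.48)))
```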

Source: Home Assistant (Blog officiel) It’s been a busy few months composing behind the scenes, building up to a massive crescendo. Today, the beat finally drops on Music Assistant’s biggest update yet. With version 2.7, Music Assistant is getting all jazzed up with a visual overhaul, a chart-topping lineup of new features and providers, along with a brand-new streaming protocol we’re spinning up ourselves. Of course, you can always update and experience all the great new stuff without reading the rest of this, but you might miss a deep cut. In fact, we can’t even cover everything in this blog (there really is that much), so go sing your praises for anything we missed in the ! Table of contents Marvin joins the team A visual overhaul Users and logins Remote music streaming Introducing Sendspin AirPlay additions Lyrics support Smart fading And much more Join the audio revolution “With a Little Help from My Friends” Marvin joins the team Music Assistant has gained its first full-time employee at the Open Home Foundation. No, not me! My day job is leading the Ecosystems department at the foundation (which comprises all the software projects the Foundation has that are not Home Assistant itself). Marvin will be joining the foundation in the new year to work full-time on Music Assistant, leading the project’s day-to-day operations. Marvin has been contributing to the project for three years now, working on all sorts of parts of the project, and specifically with the Apple Music and YouTube providers. Not to worry, I’m pretty obsessed with my audio setup and will still be tinkering on my little pet project . “Everything in Its Right Place” A visual overhaul Music Assistant joining the foundation has given us a lot more than a nice open home; it’s given the project clearer direction and some expert help. 
One area some people felt Music Assistant fell short was its UI and UX, and in version 2.7, we’re starting the process of giving it a major overhaul, making it look as good as your music sounds! This is just the beginning of a big process, so expect every update to bring more polish. The first thing you’ll probably notice is the collapsible navbar on the left of the screen, which looks pretty familiar to another Assistant . Now it’s much more intuitive, especially for new users. The settings page has also been made much easier to navigate with breadcrumbs. The biggest star of the show is the new Built-in Player, which lets you listen to music on the browser you’re using to hunt for your next track. Great for double-checking if the next song is family-friendly before sending it to every speaker in the home. “Bulletproof” Users and logins A lot of new features we’ve implemented wouldn’t be possible without some form of login and authentication. It was a much-requested feature, as security even within your home shouldn’t be ignored. We know logging in every once in a while can be a minor inconvenience, but we’ve tried to make it as unobtrusive as possible, even implementing a way to use your Home Assistant login as a “Single Sign-On”. You can now have different user profiles with their own music providers. No more having four Tidal accounts all sitting next to each other, cluttering up the Playlists tab. You can even assign who has access to each speaker; say goodbye to the kids playing Demon Hunters on your office speaker during your performance review . In Settings, just head to the User Management section, where you can add and edit your new users. “Around the world” Remote music streaming One feature made possible with our new login interface is remote music streaming – yes, that’s correct, Music Assistant anywhere you can connect to the internet. We’ve created a new web app that allows for remote connections while you’re out and about. 
It uses Home Assistant Cloud’s built-in multimedia streaming capabilities (WebRTC) to help route the audio from your Music Assistant server to wherever you are. A Home Assistant Cloud subscription is not required to use this feature; a big shoutout to Nabu Casa for providing their infrastructure for free to our users. Home Assistant Cloud subscribers get access to even more powerful routing, which improves streaming in more places. This subscription also supports the full-time development of Music Assistant . This connection is peer-to-peer and end-to-end encrypted, meaning no one will know if you’re listening to ABBA . I wouldn’t say it’s ready to replace your current music streaming service, but it’s a great way to get your FLACs playing at a friend’s house. You could even open two instances of the web app and stream it to two devices, and they’ll be synchronized… but how is that even possible? “Spin me right round” Introducing Sendspin For some time, the Music Assistant team has been looking for the best way to stream audio, album art, and other music visualizations to the devices we have around our homes. There are a couple of projects out there doing cool stuff with streaming audio, but not any that fit our needs. So, when it doesn’t exist, it’s time to start building. Introducing Sendspin, a new multimedia streaming and synchronizing protocol. It’s fully open source and free to use. Sendspin can stream high-fidelity audio, album art, and visualizer data, automatically adapting to each device’s capabilities. Imagine an e-paper display showcasing the album cover, while multiple speakers play in sync, and smart lights pulse to the rhythm. The best way to use it right now is either via your browser or a Home Assistant Voice Preview Edition running beta firmware. 
We’ve built the experimental ability to use Sendspin on Google Cast-capable speakers (we’re also looking to do the same with AirPlay-capable speakers), which will allow Sendspin to work with a lot of different hardware. A big thanks to Maxim and Kevin at the Open Home Foundation, who have been instrumental in making Sendspin a reality. Even though it can do some impressive stuff today, it’s very much a tech preview, and this announcement is our call to all developers and DIY audio hobbyists – we need your help building and testing this. This is the spec, start building with it! All the best things in life are meant to be shared, and your music should be as free and open as the software we love. So spin that record , drop the needle, and send that music across your entire home. “Aeroplane” AirPlay additions We recently added support for external audio sources, the first being Spotify Connect. This allows you to stream audio from the Spotify app to your Music Assistant server, which could send it across all your speakers, even if they don’t support Spotify Connect. We’ve now added the ability to send AirPlay audio to Music Assistant, which you can then send anywhere in your home. We also now support AirPlay 2 speakers as a player provider, which means perfectly synced audio across all your AirPlay 2-capable speakers, like HomePods. We recommend reading the limitations in the documentation, as not all AirPlay 2 devices are made equal . “Sing” Lyrics support Never again be left guessing what Kurt is saying in Smells Like Teen Spirit. As of Music Assistant 2.6, you can now see the lyrics of the song you’re playing. If the lyrics provider supports it, there is the ability to have these words time-synced, making it more like karaoke. Lyrics can be found when you open the queue menu and it will be in the “lyrics” tab (this tab will only appear if the track name, artist and album are matched to the lyrics providers). 
We started with support of LRCLIB, but have since added Tidal lyric syncing, Genius lyrics, and local LRC files. “Smooth operator” Smart fading Music Assistant is now your personal in-house DJ, perfectly blending one song into the next, and unlike a DJ it always takes your requests . This latest update adds Smart fading, which takes into account the BPM of each song, to make crossfading between songs sound more natural.
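The post does not publish the fading algorithm itself, but the general shape of a crossfade is easy to sketch. Below is a hypothetical equal-power crossfade, not Music Assistant's actual implementation: the two gains follow sin/cos so combined power stays constant, and the fade length is scaled by tempo so it spans a fixed number of beats:

```python
import math

def equal_power_gains(progress):
    """Gains for outgoing/incoming tracks; progress runs 0.0 -> 1.0.
    Since sin^2 + cos^2 = 1, perceived loudness stays roughly constant."""
    theta = progress * math.pi / 2
    return math.cos(theta), math.sin(theta)

def fade_seconds(bpm, beats=8):
    """Hypothetical BPM-aware duration: faster songs get shorter fades."""
    return beats * 60.0 / bpm

out_gain, in_gain = equal_power_gains(0.5)
print(round(out_gain, 3), round(in_gain, 3))  # both ~0.707 at the midpoint
print(fade_seconds(120))  # 8 beats at 120 BPM -> 4.0 seconds
```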

Source: Home Assistant Community Forum (Latest) tl;dr: Here to seek your support for a feature request on improving how HA’s databases work, what goes into them, and how they are maintained. After recently having a major blowout in my SQLite database (reaching 1.8 GB), it prompted me to sit down and review a number of things related to recorder: and just what data was stored in my HA DB. Doing so sent me further down a rabbit hole, thinking about how HA is set up from the start, what happens when new integrations and devices are added, and the high learning curve new users to the platform face when it comes to understanding what goes into the database and achieving an optimal configuration that doesn’t tax their hardware or storage. Long story short: HA has some really great opportunities in this space to help users optimise the database for performance, size, and reduced load/storage device wear. To this end, I have opened a significant and detailed feature request, and I’d appreciate your support and/or discussion on it: GitHub Database & Recorder integration: Overhaul UI to permit reduced DB size... Describe your core improvement Prompt at instance setup to configure settings for proper DB maintenance from the outset: As part of the onboarding/instance first time setup process, or at a later s... The feature request above covers all the usual topics, which I hope will assist you in providing unqualified support, and discussion that might help drive better outcomes and faster uptake by the HA core team. I also have a separate blog post that goes deeper into the detail on the topic, especially the theory and practice of optimising an existing instance, for others with the same dilemma I had. Thanks in advance for your votes and discussions, they will be appreciated. 2 posts - 2 participants Read full topic
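For anyone wanting to trim their database today without waiting on the feature request, the recorder integration already accepts filters and a shorter retention window in configuration.yaml. A minimal sketch; the excluded domains, globs, and entities below are examples to adapt to your own setup:

```yaml
recorder:
  purge_keep_days: 7     # default is 10; older history is purged automatically
  commit_interval: 30    # seconds between batched writes, reduces disk wear
  exclude:
    domains:
      - update
    entity_globs:
      - sensor.*_signal_strength   # example: chatty diagnostic sensors
    entities:
      - sun.sun                    # example entity
```

Filtering at the recorder level keeps noisy state changes out of the database entirely, which is usually far more effective than purging after the fact.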

Source: Gladys Assistant (Forum) @pierre-gilles, Following up on this topic: I've finished the verification for the PR on the improved energy-monitoring recalculation. I think you can review it: github.com/GladysAssistant/Gladys Energy monitoring - New Add date range recalculation for energy monitoring with improved jobs and tests master ← Terdious:energy-recalc-date-and-multi-features 05:16PM - 06 Jan 26 UTC +3203 -428 ### Pull Request check-list To ensure your Pull Request can be accepted as fast as possible, make sure to review and check all of these items: - [x] If your changes affect code, did you write the tests?

  • Are tests passing? (npm test on both front/server)
  • Is the linter passing? (npm run eslint on both front/server)
  • Did you run prettier? (npm run prettier on both front/server)
  • If you are adding new features/services, did you run the translation comparator? (npm run compare-translations on front)
  • Did you test this pull request in real life? With real devices? If this development is a big feature or a new service, we recommend that you provide a Docker image to the community (forum) for testing before merging.
  • If your changes modify the API (REST or Node.js), did you modify the API documentation? (Documentation is based on in code)
  • If you are adding new features/services which need explanation, did you modify the user documentation? See the GitHub repo and the website.
  • Did you add fake requests data for the demo mode (front/src/config/demo.js) so that the demo website is working without a backend? (if needed) See https://demo.gladysassistant.com. NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open. ### Description of change * Translate * +156 lines / -21 lines * Front * +923 lines / -137 lines * Server * +699 lines / -126 lines * Tests * +1232 lines / -129 lines - Add start/end date support for energy consumption and cost recalculation (full or selected features).
  • Purge recalculated consumption states within the selected period before recomputing.
  • Return job_id on recalculation endpoints and handle errors on the UI side.
  • Display recalculation period in background jobs.
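The flow the PR describes (purge the stored consumption states inside the selected window, then recompute them from the raw index readings) can be sketched in a few lines. Gladys itself is Node.js and the real logic lives in the PR; this Python sketch, with hypothetical names, only illustrates the purge-and-recompute idea:

```python
from datetime import datetime

def recalculate_energy(states, start, end):
    """Recompute consumption deltas for timestamps in [start, end].

    `states` maps datetime -> cumulative index reading (e.g. a kWh meter).
    Returns {datetime: consumption since the previous reading}, i.e. the
    values a purge-and-recompute job would re-insert for the window.
    """
    timestamps = sorted(states)
    result = {}
    prev = None
    for ts in timestamps:
        if prev is not None and start <= ts <= end:
            result[ts] = states[ts] - states[prev]  # delta vs previous index
        prev = ts
    return result

readings = {
    datetime(2026, 1, 1): 100.0,
    datetime(2026, 1, 2): 103.5,
    datetime(2026, 1, 3): 107.0,
}
print(recalculate_energy(readings, datetime(2026, 1, 2), datetime(2026, 1, 3)))
```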

Source: Gladys Assistant (Forum) Hi everyone! You've probably heard of OpenClaw, the AI framework that blew up on GitHub a few weeks ago and was just acquired by OpenAI. I tried connecting OpenClaw to Gladys to see what's possible, and it's genuinely impressive. I go into more detail in the video: J'ai laissé l'IA OpenClaw contrôler ma Maison (C'est fou) Note: I advise against installing OpenClaw on your Gladys server; it's still early-stage software that touches a bit of everything and has drawn quite a lot of criticism for its security flaws. For this test, I deployed OpenClaw on a VM in the cloud to keep it in an isolated environment. 5 posts - 3 participants Read full topic

Source: Adafruit Blog Hodzinets shares: USB Solder dispenser is easy to print, no soldering required (ironically). download the files on: https://makerworld.com/en/models/2337723-motorized-usb-solder-dispenser Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has passion and dedication for making solid objects from digital models. Recently, we have noticed electronics projects integrated with 3D printed enclosures, brackets, and sculptures, […]

Source: Gladys Assistant (Forum) Hello everyone, I'm discovering Gladys Assistant and it's very interesting. Is it possible to install Gladys on an internet box other than the FreeBox Delta? I think it at least needs the VM function. Thanks for your replies, and see you soon! 2 posts - 2 participants Read full topic

Source: openHAB Community (Latest) I’ve been using the Sonos binding for years. I have about 12 amps and I play whole house music from a filesystem of MP3s on my local NAS. My openhab is a 5.1.3 installation on a Debian-based server at my house. I use 3 scripts: A morning script that sets the volume of 10 different amps, joins them to a single “control” amp, and starts playback on that amp. A later-in-the-morning script that increases the volume plus-one-at-a-time on a handful of those amps (the ones closer to my bedroom) An evening script that decreases the volume minus-one-at-a-time on all amps, until they reach 0, then pauses the music, then resets the volumes to sane values for the next day. I have been annoyed for some time, that the Coordinator value isn’t working right. It doesn’t properly reflect the “control” amp. Most of the amps either show NULL, or their own UDN as the value of the Coordinator. I was motivated recently to look at the code, because when I turn on the family room TV, I want the kitchen amp to change sources to family room TV (rather than the “control” amp playing the whole-house music). Having finished messing around with the Blink Binding, I decided to look at the Sonos Binding code. I updated the demo distribution “app” xml to include the Sonos binding (alongside the Blink binding I’ve been developing). I struggled with the stupid dependency refresh buttons for an hour or two (translation: I don’t understand what it’s doing), and I got it working. I use a Mac laptop as my desktop / development environment. openhab calls itself a 5.2.0-SNAPSHOT version. It found the Sonos amps, and I added the Things from the inbox, then created a number of Items. To my surprise, it all works perfectly on the laptop instance of openhab running within eclipse. 
I tried to copy the .jar file from the Mac to the server, just like I had done successfully with the Blink binding jar file, but it won’t run, it complains about a upnp package missing: 2026-02-25 00:55:12.669 [WARN ] [org.apache.felix.fileinstall ] - Error while starting bundle: file:/usr/share/openhab/addons/org.openhab.binding.sonos-5.2.0-SNAPSHOT.jar org.osgi.framework.BundleException: Could not resolve module: org.openhab.binding.sonos [311] Unresolved requirement: Import-Package: org.openhab.core.config.discovery.upnp at org.eclipse.osgi.container.Module.start(Module.java:463) ~[org.eclipse.osgi-3.18.0.jar:?] at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:445) ~[org.eclipse.osgi-3.18.0.jar:?] at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1260) [bundleFile:3.7.4] Stymied by not being able to run the 5.2.0 SNAPSHOT version of Sonos on my server, I tried a different approach… Guessing perhaps I had some stale Things in my server-based openhab 5.1.3 instance, I deleted all 66 Sonos Items, all Sonos Things, and uninstalled the Sonos Add-On. I reinstalled the add-on from the marketplace, and proceeded to recreate dozens of Things and Items. It still doesn’t work. The volume Items are all NULL, and the Coordinator values are generally set to each amp’s own UDN. The controls work but the state updates back to the Items don’t: If I use the MediaControl for the “control” amp, I can advance to the next song (good) If I use debian server openhab to modify the volume of one amp, then the Item will show the new volume (good) (and also, the volume does change in the room) If I use the Sonos app to modify the volume of one amp, then the debian server openhab Item does not change (bad), but the Mac/eclipse openhab Item does change (good) So it’s probably not the Sonos Binding. But what is it? 
Server openhab Sonos Binding is not receiving incoming updates from the Sonos hardware, but Mac openhab Sonos Binding is. Not even sure where to start. The server is on wired ethernet, in the same IP network range as the MacBook’s wifi. They are both on a 192.168.192.0/18 network. Both the MacBook and the server are in the 192.168.213.* range and the Sonos amps are in the 192.168.215.* range. Again, based on the netmask these are the same network, it is a /18, NOT a /24. During openhab startup on the server (but not on the MacBook), there is an ominous message: 2026-02-25 00:46:29.974 [INFO ] [.network.internal.utils.NetworkUtils] - CIDR prefix is smaller than /24 on interface with address 192.168.213.20/18, truncating to /24, some addresses might be lost But I don’t know who’s responsible for that message, so I don’t know the scope of what it affects. Might be relevant, but then again, the MacBook eclipse openhab works just fine on this same network range (and without the INFO log message). Any ideas? Thanks in advance! 4 posts - 2 participants Read full topic
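The truncation warning may well be the whole story: the amps sit outside the /24 that openHAB falls back to. A quick check with Python's ipaddress module shows what the fallback loses (the amp address below is an example from the 192.168.215.* range, not a real device):

```python
import ipaddress

lan = ipaddress.ip_network("192.168.192.0/18")        # the actual subnet
truncated = ipaddress.ip_network("192.168.213.0/24")  # what remains after truncation to /24

server = ipaddress.ip_address("192.168.213.20")
amp = ipaddress.ip_address("192.168.215.40")  # example Sonos amp address

# Both hosts really are on the one /18 network...
print(server in lan, amp in lan)              # True True
# ...but after truncation to /24 the amps fall outside the range the server
# considers local, so discovery/event traffic scoped to that /24 misses them.
print(server in truncated, amp in truncated)  # True False
```

That would explain why commands still work (unicast to a known IP) while state callbacks from the amps never arrive.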

Source: Domoticz (Forum News) Domoticz version: 2025.2 (build 17252) Platform: (Raspbian 13.2) After upgrading to the latest beta version 2025.2 (build 17252), I noticed an issue with the lastUpdate.minutesAgo parameter in DzVents. I am using OWFS 1-Wire temperature sensors. The sensors update correctly — at least once per minute (confirmed in the device log and hardware level). In my DzVents scripts, I check: device.lastUpdate.minutesAgo Since the last update, even though the sensor was updated less than a minute ago, Domoticz shows an increasing minutesAgo value indefinitely for randomly selected sensors. The value keeps increasing as if the device had not been updated at all, while in reality: The device value is changing correctly. The device “Last Update” timestamp in the UI appears correct. Only lastUpdate.minutesAgo in DzVents is incorrect. The issue does not affect all sensors at once — it appears randomly on different OWFS devices. The same scripts worked correctly in previous versions. Statistics: Posted by tomes — Tuesday 24 February 2026 23:19 — Replies 4 — Views 363

Source: Gladys Assistant (Forum) Hello, New to Gladys and used to tinkering with Home Assistant, I wanted to migrate for more simplicity. However, I'm running into several problems after a bad network configuration crashed my Unraid server. My setup: Hardware: Lenovo ThinkCentre M710q. Storage: TerraMaster 4-bay DAS (USB). Dongles: Zigbee (on /dev/ttyACM0) and an Asus BT500 Bluetooth dongle. Gladys: installed via Docker on Unraid (currently in Bridge mode on port 8006 to avoid conflicts). The problem: after an attempt to configure the Bluetooth dongle, I switched Gladys to Host network mode, which caused a total Unraid crash (IP/port conflict) and corrupted my boot key. I had to create a "New Config" and delete the corrupted Docker image. Since then, I have two major issues: MQTT and Zigbee services: the built-in containers no longer exist. I tried a manual reinstall via Unraid's Apps tab, but Gladys no longer seems to communicate with them. Is there a way to force Gladys to reinstall and manage these containers itself (Mosquitto/Zigbee2MQTT) to get the native behaviour back? And how can I make sure the new containers point correctly to my old data in /mnt/user/appdata/? Bluetooth (Asus BT500): my dongle is still not recognized in the Gladys interface ("No device found"). I did enable Privileged mode and tried a USB passthrough via /dev/bus/usb, without success. Do you have any tips for stabilizing Bluetooth on Unraid and correctly relinking the MQTT/Zigbee services after such a crash? Thanks in advance for your help! Best regards, Didier 5 posts - 3 participants Read full topic

Source: openHAB Community (Latest) Hi all, I have several H2EU Rollershutter devices that I operate in Matter mode. The devices have been correctly added to openHAB via Matter, and I can see their state. If I close them manually via the hardware button, the state updates fine in openHAB to a value between 0 and 100 (where 100 is fully closed). The weird thing happens when I try to send commands to the device: sending "100" as a command closes the rollershutter by exactly 1%. After the blinds stop moving, the state in openHAB shows as "1". Sending any other number between 0 and 99 drives the blinds to the fully open state. Consequently, it seems as if the value is somehow divided by 100 before it is sent to the device. I tried multiplying my value by 100 (e.g. sending 5000), but anything >100 is ignored completely. Log when sending "100":

2026-02-23 17:48:45.261 [INFO ] [openhab.event.ItemCommandEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' received command 100 (source: org.openhab.ui=>org.openhab.core.io.rest$meph)
2026-02-23 17:48:45.261 [INFO ] [penhab.event.ItemStatePredictedEvent] - Item 'Rollo_WZ_links_Window_Covering_Lift' predicted to become 100
2026-02-23 17:48:45.262 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 0 to 100 (source: org.openhab.core.autoupdate.optimistic)
2026-02-23 17:48:46.445 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 0 W to 97 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower)
2026-02-23 17:48:46.461 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 100 to 1 (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:1#windowcovering-lift)
2026-02-23 17:48:47.449 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 97 W to 0 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower)

Log when sending "50" (after "100" was sent and the blinds are 1% closed):

2026-02-23 17:48:58.093 [INFO ] [openhab.event.ItemCommandEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' received command 50 (source: org.openhab.ui=>org.openhab.core.io.rest$meph)
2026-02-23 17:48:58.095 [INFO ] [penhab.event.ItemStatePredictedEvent] - Item 'Rollo_WZ_links_Window_Covering_Lift' predicted to become 50
2026-02-23 17:48:58.096 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 1 to 50 (source: org.openhab.core.autoupdate.optimistic)
2026-02-23 17:48:59.261 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 0 W to 3 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower)
2026-02-23 17:49:00.280 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 3 W to 0 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower)
2026-02-23 17:49:01.320 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 50 to 0 (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:1#windowcovering-lift)

I also already tried applying a "|(input*100)" transformation to the channel. It did not really change a thing (other than 0 and 1 becoming the only "valid" numbers to send - still not able to lower the blinds below 1%). The problem does not seem to be the hardware of the device itself (I thought that at first), since a parallel Home Assistant installation can operate the rollershutter fine. openHAB is on version 5.1.1 (running in a Docker container). The firmware on all devices is upgraded to the current version. I am really running out of ideas about what else I can try.
If anybody would have any ideas about this, it would be greatly appreciated. 5 posts - 2 participants Read full topic
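One note for context, not taken from the post: the Matter Window Covering cluster encodes lift position in hundredths of a percent ("percent100ths", a 0-10000 range), so a missing ×100 conversion on the sending side would produce exactly this symptom. A minimal sketch of the conversion, with hypothetical helper names:

```python
# Sketch of the Matter Window Covering lift-position units (percent100ths).
# Helper names are hypothetical; only the 0-10000 unit range comes from the
# Matter specification.

def percent_to_percent100ths(percent: float) -> int:
    """UI-style 0-100 lift percentage -> Matter 0-10000 percent100ths."""
    return round(percent * 100)

def percent100ths_to_percent(p100: int) -> float:
    """Matter 0-10000 percent100ths -> 0-100 lift percentage."""
    return p100 / 100

# Correct round-trip: a "fully closed" command of 100 % is 10000 on the wire.
assert percent_to_percent100ths(100) == 10000

# The symptom above: if the x100 step is skipped, the raw command "100" is
# read as 100 percent100ths, i.e. just 1 % -- matching the observed behavior.
assert percent100ths_to_percent(100) == 1
```

If this is the cause, it would point at the binding's command path rather than the device, which would be consistent with Home Assistant operating the same shutter correctly.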

Source: Gladys Assistant (Forum) Hello everyone, I have just published a new version of Gladys, with several improvements and fixes. New features: Matterbridge integration: Gladys now lets you launch a Matterbridge container in one click from within Gladys, opening the door to even more compatible devices. If you want to know how I developed this integration with AI, I say more about it on YouTube: Claude AI a codé mon intégration domotique en 30 min (j'ai rien fait). Documentation: Matterbridge | Gladys Assistant. Note: if you have already launched Matterbridge on your instance, this can cause a port conflict. There is no real benefit to launching Matterbridge through Gladys if you have already been able to launch it yourself outside Gladys. Favorites for integrations: you can now mark your favorite integrations to find them more easily. Tasmota: automatic IP discovery via MQTT, a direct link to the device's interface, and better sorting during discovery. Fixes and improvements: Room temperature widget: outlier temperature values are now excluded, and the Fahrenheit conversion for the maximum value is fixed. Chat: spaces in messages are now correctly preserved thanks to pre-wrap. DuckDB updated to version 1.4.4. Typo fixes in the translations. Improved robustness of the MCP service. Thanks to all the contributors: @bertrandda, @mutmut, @Will_71, @qleg and @Terdious for this great collaborative work! Happy updating! 12 posts - 5 participants Read full topic

Source: Domoticz (Forum News) Hello. I have a problem with my EcoDevices (first generation) from GCE Electronics (2 inputs for teleinfo one and two, and 2 other inputs for 2 counters). It works perfectly when I connect only teleinfo1. All devices are created: Teleinfo 1 Alerte courant, Teleinfo 1 Tarif en cours, Teleinfo 1 Pourcentage de Charge, Teleinfo 1 Courant, Teleinfo 1 kWh Total. (It's also OK for the 2 counters C1 and C2.) But when I connect the second signal, teleinfo2, the devices are also created, except one (the most important): Teleinfo 2 kWh. The data for teleinfo1 is briefly displayed on the Teleinfo 1 kWh sensor but immediately replaced with teleinfo2's data. In the log, I get the line for Teleinfo 2 kWh Total twice (but two different lines for other sensors, like teleinfo courant): 2026-02-23 12:55:47.050 ECO-DEVICES: Fetching Teleinfo 1 data 2026-02-23 12:55:47.152 ECO-DEVICES: Fetching Teleinfo 2 data 2026-02-23 12:55:47.152 ECO-DEVICES: P1 Smart Meter (Teleinfo 2 kWh Total) 2026-02-23 12:55:47.257 ECO-DEVICES: P1 Smart Meter (Teleinfo 2 kWh Total) ... 2026-02-23 12:55:47.259 ECO-DEVICES: Current (Teleinfo 2 Courant) 2026-02-23 12:56:47.754 ECO-DEVICES: Current (Teleinfo 1 Courant) ... I know that the EcoDevices (first generation) is an old system. I also know that this system is only used in France, and almost everyone uses only one teleinfo signal. And perhaps the problem comes from GCE Electronics. So I'm not sure someone can help me… But I still hope... Patrick Statistics: Posted by Pchatill — Monday 23 February 2026 13:06 — Replies 1 — Views 212

Source: Gladys Assistant (Forum) Hello, As promised, here is a tutorial on connecting an SMLIGHT dongle to Gladys over the network rather than over USB, since plugging this dongle in over USB defeats its whole purpose (Gladys natively only supports USB mode). @pierre-gilles, stop me if I'm wrong: I went with a setup where MQTT and Z2M are external to Gladys. @pierre-gilles? Here I use port 443 for Z2M over HTTPS, but to avoid any conflict if you decide to install everything on the same machine, you should change it to port 4343, for example. I had already written a complete tutorial for HAOS back when I was using it, which you can find here: Home Assistant Communauté Francophone – 29 May 25 – [Tuto] Installer HAOS sur Proxmox avec Z2M et MQTT (Full SSL/TLS - Lets Encrypt)

Installing Mosquitto (MQTT)

Install a VM with Ubuntu 24.04, fully update it, and run the following commands.

To install Docker:

```
curl -sSL https://get.docker.com/ | CHANNEL=stable sh
systemctl enable --now docker
```

Add a user, for example docker_mosquitto:

```
adduser docker_mosquitto
```

Retrieve its ID (here 1002):

```
cat /etc/passwd | grep docker_mosquitto
```

Create the mosquitto folder in /opt:

```
mkdir /opt/mosquitto
```

Create the docker-compose.yml file with the following content (replace 1002 with the ID retrieved just before):

```yaml
services:
  mosquitto:
    image: eclipse-mosquitto:2.0.22
    container_name: mosquitto
    restart: unless-stopped
    user: "1002:1002"
    ports:
      - "1883:1883"  # MQTT
      - "8883:8883"  # MQTTS (secure)
      - "9001:9001"  # WebSockets
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
      - ./mosquitto/log:/mosquitto/log
      - /etc/localtime:/etc/localtime:ro
      - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem:ro
      - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem:ro
      - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem:ro
```

Create the mosquitto folder and its subfolders, and apply the permissions (this folder will hold the configuration, data, and logs):

```
mkdir /opt/mosquitto/mosquitto
mkdir /opt/mosquitto/mosquitto/data
mkdir /opt/mosquitto/mosquitto/config
mkdir /opt/mosquitto/mosquitto/log
touch /opt/mosquitto/mosquitto/log/mosquitto.log
chown -R 1002:1002 /opt/mosquitto/mosquitto
```

Create the config file /opt/mosquitto/mosquitto/config/mosquitto.conf:

```
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883 localhost
allow_anonymous false
#password_file /mosquitto/config/passwd
tls_version tlsv1.3
listener 8883
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
listener 9001
protocol websockets
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
```

Enable SSL (adapt this to the plugin you use to obtain the certificate for your domain name; this example uses Infomaniak):

```
apt install certbot
apt install python3-pip
pip install certbot-dns-infomaniak
export INFOMANIAK_API_TOKEN=xxx
certbot certonly \
  --authenticator dns-infomaniak \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos \
  --rsa-key-size 4096 \
  -d 'mqtt.xxx.local.srv-home.fr'
```

By default, certbot installs a service that periodically renews its certificates automatically. For that, the command needs to know the API key, otherwise it will fail silently. To enable automatic renewal of your wildcard certificates, edit /lib/systemd/system/certbot.service and add the following line in the Service section, substituting your token:

```
Environment="INFOMANIAK_API_TOKEN="
```

Then open the renewal config file:

```
nano /etc/letsencrypt/renewal/xxx.conf
```

and add:

```
renew_hook = docker restart mosquitto
```

Then apply the permissions:

```
chmod -R 755 /etc/letsencrypt/live
chmod -R 755 /etc/letsencrypt/archive
```

Start the container:

```
cd /opt/mosquitto
docker compose up -d
```

You can watch the Docker container's logs with:

```
docker logs mosquitto -f
```

Enable authentication (replace username with a user of your choice, e.g. "mqttuser"):

```
docker exec -it mosquitto mosquitto_passwd -c /mosquitto/config/passwd username
```

Uncomment the line "password_file /mosquitto/config/passwd" in /opt/mosquitto/mosquitto/config/mosquitto.conf, which now gives:

```
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883 localhost
allow_anonymous false
password_file /mosquitto/config/passwd
tls_version tlsv1.3
listener 8883
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
listener 9001
protocol websockets
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
```

Then restart the container:

```
docker restart mosquitto
```

You can then enter this login in Gladys, and right after that in Zigbee2MQTT.

Installing Zigbee2MQTT

Install a VM with Ubuntu 24.04, fully update it, and follow this procedure: Linux Docker | Zigbee2MQTT.

To install Docker:

```
curl -sSL https://get.docker.com/ | CHANNEL=stable sh
systemctl enable --now docker
```

Here is my docker-compose.yml file:

```yaml
services:
  zigbee2mqtt:
    container_name: zigbee2mqtt
    image: koenkk/zigbee2mqtt:2.8.0
    restart: unless-stopped
    volumes:
      - ./data:/app/data
      - /run/udev:/run/udev:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem:/etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem:ro
      - /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem:/etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem:ro
    devices:
      - /dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0:/dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0
    ports:
      - "443:443"  # external port 443 → internal port 443
    environment:
      - TZ=Europe/Paris
    networks:
      - z2m_net
networks:
  z2m_net:
    driver: bridge
```

Here is my /opt/z2m/data/configuration.yaml as an example; you need to change the auth_token (which lets you log in to the web interface) as well as the password of the z2m user set earlier during the MQTT installation:

```yaml
homeassistant:
  enabled: false
mqtt:
  base_topic: zigbee2mqtt
  server: mqtts://mqtt.xxx.local.srv-home.fr:8883
  user: mqttuser
  password: achanger
  keepalive: 60
  reject_unauthorized: true
  version: 4
  include_device_information: true
serial:
  port: tcp://192.168.xx.xx:7638
  baudrate: 460800
  adapter: zstack
  disable_led: false
advanced:
  pan_id: GENERATE
  network_key: GENERATE
  channel: 25
  homeassistant_legacy_entity_attributes: false
  legacy_api: false
  legacy_availability_payload: false
  log_level: info
  log_syslog:
    app_name: Zigbee2MQTT
    eol: /n
    host: localhost
    localhost: localhost
    path: /dev/log
    pid: process.pid
    port: 514
    protocol: udp4
    type: '5424'
  last_seen: ISO_8601
frontend:
  enabled: true
  package: zigbee2mqtt-windfront
  port: 443
  host: 0.0.0.0
  url: https://z2m.xxx.local.srv-home.fr
  ssl_cert:
```
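As a quick sanity check of the endpoints used above, the broker and frontend URLs can be parsed to confirm the expected schemes and ports (a minimal stdlib sketch using the tutorial's placeholder host names):

```python
from urllib.parse import urlparse

# Placeholder endpoints from the tutorial above.
broker = urlparse("mqtts://mqtt.xxx.local.srv-home.fr:8883")
frontend = urlparse("https://z2m.xxx.local.srv-home.fr:443")

# The broker must target the TLS listener defined in mosquitto.conf (8883).
assert (broker.scheme, broker.port) == ("mqtts", 8883)

# The Z2M frontend is published on 443; switch to 4343 if 443 is already
# taken on the same machine, as the tutorial suggests.
assert frontend.port == 443
print(broker.hostname, frontend.hostname)
```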

Source: Domoticz (Forum News) Hi all, This morning my Docker container was updated to the latest beta (17204). There seems to be a problem with a Python library, if I interpret the log correctly: 2026-02-21 13:32:06.430 Status: Domoticz V2025.2 (build 17204) (c)2012-2026 GizMoCuz 2026-02-21 13:32:06.430 Status: Build Hash: 71a61ff22, Date: 2026-02-20 11:33:19 2026-02-21 13:32:06.431 Status: Startup Path: /opt/domoticz/ 2026-02-21 13:32:06.511 Status: PluginSystem: Failed dynamic library load, install the latest libpython3.x library that is available for your platform. 2026-02-21 13:32:06.513 Status: PluginSystem: 'ConBee2' Registration ignored, Plugins are not enabled. 2026-02-21 13:32:06.513 Status: PluginSystem: 'SolarEdge' Registration ignored, Plugins are not enabled. 2026-02-21 13:32:06.513 Status: PluginSystem: 'Shelly MQTT' Registration ignored, Plugins are not enabled. It should not be related (because I use Docker), but I am running Docker on a headless Debian Trixie NUC. I am posting this in the hope that this will be corrected in the next beta. I have no desire to go fiddle inside the container. Statistics: Posted by Sjonnie2017 — Saturday 21 February 2026 13:40 — Replies 2 — Views 290

Source: Domoticz (Forum News) After an update from beta 17099 to 17189, Domoticz did not restart. I did the update through the settings screen. The counter counted up to 100, then the message: update failed, no internet connection.... After a Domoticz service restart, Domoticz was running normally on the downloaded beta 17189. I thought this problem was solved? Attached are the update log and the Domoticz crash log: domoticz_crash.log update.log Statistics: Posted by Rik60 — Sunday 15 February 2026 20:21 — Replies 5 — Views 484

Source: Domoticz (Forum News) Since yesterday I have been getting the following error from the buienradar integration: Code: Error: Internet weer: Invalid data received (station measurement empty), or no data returned! It started somewhere during the day. Updating this morning to the latest beta did not resolve the issue. Any clue to the root cause? An API change? Statistics: Posted by JanJaap — Tuesday 10 February 2026 11:45 — Replies 2 — Views 167

Source: Domoticz (Forum News) System: Raspberry Pi Zero W, Raspbian Bookworm Lite. It is a new, headless installation, no hardware attached. SSH is functional, but I am not sure whether ports 8080 or 443 are open. wget 192.162.1.82:8080 is not answering. Code:

sudo service domoticz.sh status
domoticz.service - LSB: Home Automation System
Loaded: loaded (/etc/init.d/domoticz.sh; generated)
Active: active (exited) since Sat 2026-02-07 19:54:02 CET; 7min ago
Docs: man:systemd-sysv-generator(8)
Process: 2666 ExecStart=/etc/init.d/domoticz.sh start (code=exited, status=0/SUCCESS)
CPU: 331ms
Feb 07 19:54:00 haz11 systemd[1]: Starting domoticz.service - LSB: Home Automation System...
Feb 07 19:54:01 haz11 domoticz.sh[2666]: Time synchronized, starting Domoticz...
Feb 07 19:54:02 haz11 domoticz.sh[2666]: Illegal instruction <------------------!!!!
Feb 07 19:54:02 haz11 systemd[1]: Started domoticz.service - LSB: Home Automation System.

This seems to be the critical line: Code: Feb 07 19:54:02 haz11 domoticz.sh[2666]: Illegal instruction. What is the problem there, and what is the solution? Is the Pi Zero suitable for running Domoticz? If the current OS or Domoticz version is not suitable for the Pi Zero, which is the latest usable version? Greetings to everyone and have a nice weekend. Statistics: Posted by rabbit — Saturday 07 February 2026 20:36 — Replies 26 — Views 801

Source: Domoticz (Forum News) I used "Open Hardware Monitor" for a long time with a Domoticz "Motherboard" hardware entry. With my new PC, "Open Hardware Monitor" doesn't support many of the motherboard's sensors. Some older Domoticz release notes show that "Libre Hardware Monitor" is supported. However, if I try to use it like "Open Hardware Monitor", it seems not to be recognized by Domoticz. Furthermore, I couldn't find any how-to hints. Can anyone give advice? Addition: the Domoticz log shows "Warning, neither Libre Hardware Monitor nor Open Hardware Monitor are installed on this system." However, Libre Hardware Monitor is running. Domoticz 2025.2 is running on Windows 11, with Libre Hardware Monitor 0.9.5. Statistics: Posted by Itschi — Friday 06 February 2026 8:51 — Replies 1 — Views 206

Source: Domoticz (Forum News) Version: 2025.2 Platform: Pi4 Plugin/Hardware: RFXxl Description: E-mail notifications with a picture (security cam) do not work anymore. The error message is: 2026-02-03 17:16:09.188 Error: SMTP Mailer: Error sending Email to: ! 2026-02-03 17:16:09.188 Error: libcurl: (55) 2026-02-03 17:16:09.188 Error: Send failure: Connection reset by peer. Test e-mails are received OK. Until recently the mails with a snapshot of the security cam were received, and the snapshot works in the camera settings menu. Error messages and settings screen attached. Statistics: Posted by Verdwaald — Tuesday 03 February 2026 22:15 — Replies 2 — Views 192

26. EN v2.0.2

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.2 — Advanced Energy Reporting & Charting System Unleashed! We are excited to announce the stable release of HomeGenie 2.0.2, bringing powerful new capabilities for data visualization and energy management to your programmable intelligence platform. This release focuses on transforming raw system data into clear, actionable insights directly within your HomeGenie dashboard. Key Highlights & New Capabilities: Advanced Charting Widget & UI: Multi-Dataset Visualization: The Chart component now supports rendering multiple datasets simultaneously, allowing for mixed bar and line chart types to represent diverse data. Intuitive Historical Navigation: Navigate through your historical data with ease using new Year and Day selectors, complete with convenient Prev/Next navigation buttons. Dynamic Labels: X-axis labels now dynamically adapt based on the selected date range, enhancing data clarity and readability. Optimized Performance: Refactored to employ a "One Worker per Dataset" logic for modular and parallel data fetching, ensuring responsive performance even with large historical logs. UI/UX Improvements: Enhanced Chart.js integration provides theme-aware colors and opacity (supporting glassmorphism effects), alongside fixes for update synchronization and layout shifts. Daily Energy Reporting System (Backend): New Automation Program: Introduced the Daily Energy Report program, designed to automatically aggregate and log Meter.Watts.Hour data from your devices. YAML Persistence: Implemented robust data storage using daily YAML files (e.g., YYYY_DDD_daily_stats.yaml) for reliable historical tracking of energy consumption. Dedicated APIs: Exposed Statistics.Providers/DailyEnergyReport for dynamic widget configuration, enabling seamless integration of daily data. Added DataProcessing.Statistics/DailyEnergyReport APIs for efficient fetching of raw dataset values. 
Flexible Data Retrieval: Implemented logic to retrieve specific energy data for chosen years and days via new API parameters. Device Integration: Added the Include in Daily Wh report feature toggle, allowing users to easily select which devices contribute to the energy reports. Why HomeGenie 2.0.2? This release further solidifies HomeGenie's commitment to empowering you with local-first, cloud-independent, and intelligent programmable systems. Gain unparalleled insights into your energy consumption and system behavior, enabling smarter automations and a more efficient environment—all managed privately on your own hardware. Download the latest stable build of HomeGenie 2.0.2 from our repository today! Happy Automating! Full Changelog: v2.0.1...v2.0.2
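The `YYYY_DDD_daily_stats.yaml` naming scheme mentioned in the release notes can be sketched as follows. Only the file-name pattern comes from the announcement; the per-device aggregation logic is a hypothetical illustration, not HomeGenie's actual implementation:

```python
from datetime import date

def daily_stats_filename(d: date) -> str:
    # DDD is the zero-padded day of the year, matching the
    # YYYY_DDD_daily_stats.yaml pattern from the release notes.
    return f"{d.year}_{d.timetuple().tm_yday:03d}_daily_stats.yaml"

def aggregate_wh(samples):
    # Hypothetical aggregation: sum Meter.Watts.Hour readings per device.
    totals: dict[str, float] = {}
    for device, wh in samples:
        totals[device] = totals.get(device, 0.0) + wh
    return totals

print(daily_stats_filename(date(2026, 2, 27)))  # 2026_058_daily_stats.yaml
print(aggregate_wh([("heater", 120.5), ("fridge", 40.0), ("heater", 79.5)]))
```

One YAML file per day keeps historical energy data append-only and trivially addressable by date, which fits the Year/Day selectors in the new charting widget.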

27. EN v2.0.1

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.1 — Your Local AI-Powered Programmable Intelligence Unleashed! We are thrilled to announce the official stable release of HomeGenie 2.0.1, culminating over three years of dedicated development into a completely re-imagined platform. HomeGenie has evolved into a robust, local-first, and privacy-centric system of programmable intelligence, with Agentic AI at its core. This release empowers you with cutting-edge capabilities to transform any environment into a truly intelligent and autonomous system. Key Highlights & New Capabilities: Local AI & Lailama Agentic Engine: Intelligent & Adaptive: The Lailama engine dynamically optimizes its parameters (Context Window, Batch Size) based on your system's available RAM, ensuring stable and efficient operation across diverse hardware. Granular AI Control: A brand-new, intuitive configuration UI (supporting both Light and Dark themes) allows you to fine-tune Lailama's behavior: Adjust Creativity (Temperature) from precise logic to creative responses. Manage Working Memory (Context Window) for enhanced AI recall. Control System Context Sharing to feed real-time module and sensor data to the AI for highly accurate agentic actions. Enhanced Reasoning: Improved System Report formatting and refined system prompts for Lailama and Gemini providers lead to smarter intent recognition and execution. AI Vision Suite: Full integration of YOLO (Object Detection, Instance Segmentation, Pose Estimation) directly on server and ESP32-CAM modules. Agentic Scheduling (Genie Command): The Scheduler now hosts AI-driven tasks, allowing natural language commands to define complex automations autonomously. Speech Recognition & Synthesis: Improved microphone input and voice responses in the new AI chat interface. Async Model Downloads: Robust Download Manager for GGUF models with pause/resume support. 
Developer API & Framework: New Licensing Model: Re-licensed to GNU Affero General Public License v3.0 for a protected open-source ecosystem. Extended Widget Capabilities: zuix.d.ts is updated with new widget controller methods: this.apiCall(), this.showSettings(), and this.translate() for deeper integration. Universal Fluent API Generator: Generates ready-to-use C#, JavaScript, and Python code with a unified syntax for module interaction. ModuleField API: Added the .decimalValue property to ModuleField for simplified numeric handling in UI logic. User Interface (UI) & Experience (UX) Overhaul: Modernized UI: A sleek, responsive, and multilingual interface with full support for Light/Dark themes and enhanced readability. Redesigned AI Chat: "Bottom sheet" style chat with explicit thought processes, smart scroll, token buffering, and unified "Stop" commands. Customizable Dashboards: New preferences for custom wallpapers (including animated GIFs), widget card colors, opacity, and blur effects. Quick Control Sheets: Implemented Floating Action Buttons (FABs) for rapid control of scenes, lights, colors, and shutters directly from the dashboard. Smart Display Integration: ESP32/ESP32-S3 devices with touch displays now function as customizable and autonomous control centers. Revamped Log Events Viewer: Interactive chart preview with seamless navigation for efficient log analysis. Code Editor Minimap: Enabled for faster navigation within long scripts. Clearer Visual Programming: The "VPL" entry has been renamed to "Visual Program" for improved clarity and accessibility. Why HomeGenie 2.0.1? This release represents our unwavering commitment to empowering users with local-first, cloud-independent, and intelligent programmable systems. With Lailama, your HomeGenie server transforms into a truly autonomous agent, capable of understanding, reasoning, and acting on your unique environment or project—all while keeping your data private and secure on your hardware. 
Happy Automating! Full Changelog: v2.0.0-rc.15...v2.0.1

Source: HomeGenie (GitHub Releases) HomeGenie Server v2.0.0-rc.15 - The Era of Local Agentic AI This release introduces significant advancements in local AI integration and a refined developer experience for building integrated widgets and automation programs. Local AI & Lailama Enhancements The Lailama engine is now more intelligent, highly customizable, and fully context-aware. Dynamic Memory Optimization: The engine now automatically optimizes model parameters (Context Window and Batch Size) based on the system's available RAM, ensuring stability across different hardware profiles. Refined Configuration UI: A new, intuitive settings panel (supporting both Light and Dark themes) for granular AI control: Creativity (Temperature): Real-time adjustment from precise logic to creative responses. Working Memory (Context Window): Manage how much information the AI can process simultaneously. System Context Sharing: Toggle to provide the AI with real-time status of modules and sensors for precise agentic actions. Improved Context Manager: Enhanced formatting for the System Report to reduce hallucinations and improve intent-to-API mapping. System Prompt Tuning: Refined default prompts for both Lailama and Gemini providers to improve reasoning consistency. Developer API & Framework (zuix.js) We have extended the widget controller capabilities to allow deeper integration with the HomeGenie core. New Controller Methods: Updated zuix.d.ts with built-in methods: apiCall: Directly invoke HomeGenie backend services and program APIs from a widget. showSettings: Programmatically open the widget's configuration interface. translate: Easily handle localized labels using the internal i18n engine. ModuleField API: Added the .decimalValue property to the ModuleField object for simplified numeric handling in the UI layer. UI & Editor Improvements Code Editor: Enabled the minimap for faster navigation within long scripts and programs. 
Chat UI: Fixed vertical message alignment to the bottom and optimized rendering during token streaming to eliminate Markdown flickering. File Editor Fix: Resolved a layout issue where scrollable content could overlap dialog action buttons. Visual Programming: Renamed the "VPL" entry to "Visual Program" for better clarity and accessibility. Happy Automating! Full Changelog: v2.0.0-rc.14...v2.0.0-rc.15

Source: Home Assistant (Blog officiel) Happy New Year! I hope you had a wonderful holiday, spending time with your loved ones. We’re kicking off 2026 with a smaller release, as our contributors and maintainers have been enjoying some well-deserved time off as well. But don’t worry, there’s still plenty of good stuff in this release! Home Assistant 2026.1 brings a refreshed Home dashboard experience on mobile, with summary cards right at your fingertips without extra taps. We’ve also made it easier than ever to navigate to the protocol connecting your devices, such as Zigbee, Z-Wave, Thread and more. For automation enthusiasts, we’re continuing our work on our even more “human-friendly” triggers, which can be enabled via Home Assistant Labs, so you can build automations using easy-to-understand language instead of technical state changes, like initiating automations if a button is pressed or someone arrives home. On the integrations front, we welcome eight new integrations to the family, including pet tracking with Fressnapf, energy monitoring with eGauge, and smart heating control with Watts Vision +. Plus, improvements to existing integrations from our amazing community contributors. I wish you a happy and healthy 2026! Enjoy the release! ../Frenck Home dashboard improvements Streamlined mobile navigation New devices page Purpose-specific triggers and conditions progress Easier navigation to protocol dashboards Integrations New integrations Noteworthy improvements to existing integrations Integration quality scale achievements Now available to set up from the UI Other noteworthy changes Energy dashboard date picker ESPHome action responses Patch releases 2026.1.1 - January 12 2026.1.2 - January 16 2026.1.3 - January 23 Need help? Join the community Backward-incompatible changes All changes A huge thank you to all the contributors who made this release possible! 
And a special shout-out to @bramkragten, @piitaya, and @abmantis who helped write the release notes for this release. Home dashboard improvements The Home dashboard continues to evolve! In the previous release, we introduced a brand-new sidebar layout, weather tiles, and energy distribution summaries. This release takes it even further with a streamlined mobile experience and better device management. Streamlined mobile navigation On mobile devices, the Home dashboard now displays summary cards (like lights, climate, security, media players, weather, and energy) directly at the top of the view, followed by your favorites and areas. This replaces the previous tab-based navigation, giving you instant access to everything that matters without any extra taps. The desktop experience remains unchanged, with summaries displayed in the sidebar under the For you heading. New devices page Ever wondered where your devices went after you removed them from an area? A new Devices page now appears on the Home dashboard, showing all devices that aren’t currently assigned to a specific area. This makes it easy to find and control those “orphaned” devices without hunting through the settings. The new Devices page shows devices not assigned to any area. Purpose-specific triggers and conditions progress In the previous release, we introduced purpose-specific triggers and conditions. Instead of thinking in technical state changes, you can now simply pick things like “When a light turns on” or “If the climate is heating” when building your automations. This feature is still being refined in Home Assistant Labs, but this release adds a lot more trigger types, making this new approach even more useful. Here is an overview of all the new triggers added in this release: Button triggers fire when a button entity has been pressed. Climate triggers now cover all common scenarios. 
You can trigger on HVAC mode changes, target temperature changes, or when the target temperature crosses a threshold. There are also triggers for current temperature and humidity changes, and even target humidity changes. Device tracker triggers let you automate based on when a device entered or left home, with support for the first device arriving, last device leaving, or any change. Don’t worry, person-specific triggers are coming soon, the device tracker ones were simply available sooner. Humidifier triggers will fire when a humidifier turns on or off, starts humidifying, or starts drying. You can also trigger on humidity changes or when humidity crosses a threshold. Light triggers let you automate based on brightness changes or when brightness crosses a specific threshold. Lock triggers can now fire when a lock is locked, unlocked, opened, or jammed. Scene triggers fire when a scene is activated. Siren triggers fire when sirens are turned on or off. Update trigger fires when an update becomes available. As the new purpose-specific triggers and conditions all support targeting something bigger than a simple entity (an area, a floor, or even a label), we also redesigned how the target gets displayed on the automation flow. The goal of this change is to allow you to quickly glance at your automation, and understand its purpose. Head over to Settings > System > Labs to enable purpose-specific triggers and conditions and give them a try! Easier navigation to protocol dashboards For an organization that loves the open standards that seamlessly connect our devices, we sure didn’t promote them enough! Most people didn’t even know that Home Assistant has dedicated dashboards for protocols like Zigbee, Z-Wave, and more. This release reorganizes the Settings page to give these open protocols a more prominent spot. 
The protocols section now appears right after the core settings, making it much easier to find all the different ways you’re connecting your devices and quickly access some very useful protocol-specific configurations. The menu items only appear when you have the corresponding integration set up, so you’ll only see what’s relevant to your setup. Integrations Thanks to our community for keeping pace with the new integrations and improvements to existing ones! You’re all awesome. New integrations We welcome the following new integrations in this release:

- AirPatrol, added by @antondalgren
- eGauge, added by @neggert
- Fluss+, added by @Marcello17
- Fish Audio, added by @noambav
- Fressnapf Tracker, added by @eifinger
- HomeLink, added by @ryanjones-gentex
- Watts Vision +, added by @theobld-ww
- WebRTC, added by @balloob

This release also has new virtual integrations. Virtual integrations are stubs that are handled by other (existing) integrations to help with findability. These ones are new: Levoit, provided by VeSync, added by @timmo001. Noteworthy improvements to existing integrations It is not just new integrations that have been added; existing ones are also being constantly improved. Here are some of the noteworthy changes to existing integrations: The Matter integration gained three new diagnostic binary sensors for thermostat remote sensing status from @lboue, helping you keep an eye on your climate system. @joostlek added lots of new sensors to the SmartThings integration, including air quality sensors for PM1, PM2.5, and PM10, hood filter usage tracking, fridge temperature sensors for One Door refrigerators, and fan speed control for range hoods.
Roborock owners with Q7 devices can now integrate them thanks to @Lash-L, who added basic read-only support with sensors for battery, status, and cleaning data. @mib1185 improved the FRITZ!SmartHome integration by adding switch entities that let you enable or disable FRITZ! Smart Home routines (triggers) directly from Home Assistant. The Ping integration now tracks packet loss, thanks to @mib1185. The new sensor shows packet loss as a percentage and is disabled by default.

Source: Home Assistant (Official Blog) Home Assistant 2025.11! November is here, and we’ve been hard at work refining some of the main experiences that you interact with every day, and I think you’re going to love what we’ve built. My personal favorite this release? The brand new target picker. It’s one of those changes that seems simple on the surface, but makes such a huge difference in how you build automations. You can finally see exactly what you’re targeting, with full context about which device an entity belongs to and which area it’s in. No more guessing whether you’re controlling the right ceiling light when you have three of them! But that’s just the beginning. We’re continuing with the automation editor improvements, this time with a completely redesigned dialog for adding triggers, conditions, and actions. It’s cleaner, easier to read, and sets the foundation for some really exciting stuff coming in future releases. And speaking of making things clearer, you can now control exactly how entity names appear on your dashboard cards. Want to show just the entity name? The device name? The area? Or combine them? Even if you rename things, your dashboards will stay perfectly in sync. No more manual updates needed! Oh, and energy dashboard fans will appreciate the new pie chart view for device energy, complete with totals displayed in the corner of every energy card. Enjoy the release! ../Frenck PS: Oh, and pssst… Don’t tell anyone, but there might be something exciting being released on November 19th. Hit the bell on this announced YouTube stream so you don’t miss it. Stay tuned!
In this release:

- A brand new target picker
- A brand new way to add triggers, conditions, and actions in your automations
- Naming entities on your dashboard
- Energy pie
- Progress for Home Assistant and Add-on updates
- Integrations: new integrations, noteworthy improvements to existing integrations, now available to set up from the UI, integration quality scale achievements, farewell to the following
- Other noteworthy changes: improved logging efficiency, the new Home Dashboard keeps getting smarter
- Patch releases: 2025.11.1 - November 7, 2025.11.2 - November 14, 2025.11.3 - November 21
- Need help? Join the community
- Backward-incompatible changes
- All changes

A huge thank you to all the contributors who made this release possible! And a special shout-out to @bramkragten, @JLo, @MindFreeze, @agners, and @piitaya who helped write the release notes for this release. Also, @silamon and @GemPolisher for putting effort into tweaking its contents. Thanks to them, these release notes are in great shape.

A brand new target picker

Have you ever been building an automation and wondered, “Wait, which ceiling light is this?” when you see three entities all named “Ceiling light”? Or tried to figure out how many lights you’re actually controlling when you target an entire floor or area? We’ve all been there. Until now, the target picker didn’t show you the full picture. You couldn’t see which device an entity belonged to or which area it was assigned to. And when you selected a floor or area as your target, you had no idea how many entities you were actually affecting. This uncertainty meant many of you stuck with targeting individual entities, even though larger targets (like areas and floors) can make your automations much more flexible. The new target picker changes all that. Now you get full context for everything you’re targeting, and you can see exactly how many entities will be affected by your action. Want to dig deeper?
You can expand any floor, area, or device to see exactly which entities are included and where they’re coming from. This makes it so much easier to build automations that scale with your home. When you target an area or floor, your automation automatically adapts as you add or remove devices. No more updating your automations every time you add a new light or sensor. Your automations just work, which is exactly how it should be. A brand new way to add triggers, conditions, and actions in your automations It’s no secret that we’re currently working hard on making automations easier to create. After the release of the automation sidebar two releases ago, we are now introducing a new dialog to add triggers, conditions, and actions. The changes are purely cosmetic: the dialog is bigger, so the description of each block is simpler to read, with a two-pane layout to ease both navigation and block selection. The building blocks (which are used to perform more complex conditions or sequences of actions, such as repeating actions or branching out your sequence into multiple paths) have been moved into the main dialog on a second tab. There is now a single entry point to add something to an automation instead of two, greatly reducing the number of buttons in complex automations. As mentioned above, these changes are purely cosmetic, for now! But this new dialog is the foundation of what’s coming next, and we cannot wait to present that to you once it finally lands. Naming entities on your dashboard A few releases ago, we gave the entity picker a big upgrade by adding more context so you could easily see where each entity belongs (May 2025 release). In this release, we’re bringing that same flexibility to your dashboards. You can now choose how names appear on your cards: show the entity, device, area, floor, or even combine them. This gives you full control over how your dashboards look and feel. 
For example, in a dedicated section for a specific device, you might choose to display only the entity name to avoid repeating the device name on every card. Of course, you can still set a custom name if you want complete control over the text shown. And the best part? If you rename an entity or device, your dashboards will automatically stay in sync. No more manual edits needed; everything just updates itself. Energy pie We’ve added a new layout to the devices energy graph: “pie”. You can toggle between the regular bar chart and the new pie chart by clicking the icon in the top-right corner. Doing this made the top-right corner of the other energy cards feel empty, so we used that space to display the total energy for the selected period. For example, if the date picker is set to today, the total solar energy for today will be displayed in the corner of the solar production graph card. Progress for Home Assistant and Add-on updates With this release, you can now track the progress of updates to Home Assistant and Add-ons (managed by the Supervisor)! The progress includes the stages of downloading and unpacking, so the time required will vary based on your internet speed, CPU performance, and system load. As a result, the progress is not reflected as perfectly linear, but it does still provide a good estimate of how far along the update is. Integrations Thanks to our community for keeping pace with the new integrations and improvements to existing ones! You’re all awesome. New integrations We welcome the following new integrations in this release:

- Actron Air, added by @kclif9
- Sunricher DALI, added by @niracler: a platform for managing and monitoring DALI-based lighting systems.
- Fing, added by @Lorenzo-Gasparini: provides network scanning, device detection, and presence monitoring capabilities using the Fing platform.
- Firefly III, added by @erwindouna: a free, open-source personal finance manager with full transaction management, budgets, categories, and reports.
- iNELS, added by @epdevlab: the iNELS smart home system to manage lighting, heating, and automation components for enhanced home control.
- Lunatone Gateway, added by @MoonDevLT: enables control and monitoring of DALI lighting systems through Lunatone’s DALI gateway interface.
- Meteo.lt, added by @xE1H: the Lithuanian Hydrometeorological Service (LHMT), providing regional weather forecasts for locations in Lithuania.

Source: Domotique News AI is set to boost the Internet of Things. A recent Verizon study indicates that 84% of companies see AI as a key technology for IoT, and, more importantly, 70% say it has concretely accelerated their deployments.

Source: Home Assistant (Official Blog) Last year, we laid out our vision for AI in the smart home, which opened up experimentation with AI in Home Assistant. In that update, we made it easier to integrate all sorts of local and cloud AI tools, and provided ways to use them to control and automate your home. A year has passed, a lot has happened in the AI space, and our community has made sure that Home Assistant has stayed at the frontier. We beat big tech to the punch; we were the first to make AI useful in the home. We did it by giving our community complete control over how and when they use AI, making AI a powerful tool to use in the home, as opposed to something that takes over your home. Our community is taking advantage of AI’s unique abilities (for instance, its image recognition or summarizing skills), while having the ability to exclude it from mission-critical things they’d prefer it not to handle. Best of all, this can all be run locally, without any data leaving your home! Moreover, if users don’t want AI in their homes, that’s their choice, and they can choose not to enable any of these features. I hope to see big tech take an approach this measured, but judging by their last couple of keynotes, I’m not holding my breath. Over the past year, we’ve added many new AI features and made them easy to use directly through Home Assistant’s user interface. We have kept up with all the developments in AI land and are using the latest standard to integrate more models and tools than ever before. We’re also continuing to benchmark local and cloud models to give users an idea of what works best. Keep reading to check out everything new, and maybe you can teach your smart home some cool new tricks. Local AI is making the home very natural to control Big thanks to our AI community contributor team: @AllenPorter, @shulyaka, @tronikos, @IvanLH, @Joostlek! Supercharging voice control with AI We were doing voice assistants before AI was cool.
In 2023, we kicked off our Year of the Voice. Since then, we’ve worked towards our goal of building all the parts needed for a local, open, and private voice assistant. When AI became the rage, we were quick to integrate it. Today, users can chat with any large language model (LLM) that is integrated into Home Assistant, whether that’s in the cloud or run locally via a service like Ollama. Where Assist, our home-grown (non-AI) voice assistant agent, is focused on a predetermined list of mostly home control commands, AI allows you to ask more open-ended questions. Summarize what’s happening across the smart home sensors you’ve exposed to Assist, or get answers to trivia questions. You can even give your LLM a personality! Users can also leverage the power of AI to speak the way they speak, as LLMs are much better at understanding the intent behind the words. By default, Assist will handle commands first. Only questions or commands it can’t understand will be sent to the AI you’ve set up. For instance, “Turn on the kitchen light” can be handled by Assist, while “It’s dark in the kitchen, can you help?” could be processed by an AI. This speeds up response times for simple commands and makes for a more sustainable voice assistant. Another powerful addition from the past year is context sharing between agents. So your Assist agent can share the most recent commands with your chosen AI agent. This shared context lets you say something like “Add milk to my shopping list,” which Assist will act on, and to add more items, just say “Add rice.” The AI agent understands that these commands are connected and can act accordingly. Here is an excellent walkthrough video of JLo's AI-powered home, showing many of these new features in action Another helpful addition keeps the conversation going; if the LLM asks you a question, your Assist hardware will listen for your reply. 
If you say something like “It’s dark”, it might ask whether you’d like to turn on some lights, and you could tell it to proceed. We have taken this even further than other voice assistants, as you can now have Home Assistant initiate conversations. For example, you could set up an automation that detects when the garage door is open and asks if you’d like to close it (though this can also be done without AI with a very clever Blueprint). AI pushed us to completely revamp our Text-to-Speech (TTS) system to take advantage of streaming responses from LLMs. While local AI models can be slow, we use a simple trick to make the delay almost unnoticeable. Now, both Piper (our local TTS) and Home Assistant Cloud TTS can begin generating audio as soon as the LLM produces the first few words, improving the speed of the spoken response by a factor of ten. Prompt: “Tell me a long story about a frog”

| Setup | Time to start speaking |
|---|---|
| Cloud, non-streaming | 6.62 sec |
| Cloud, streaming | 0.51 sec (13x faster) |
| Piper, non-streaming | 5.31 sec |
| Piper, streaming | 0.56 sec (9.5x faster) |

(Measured with Ollama gemma3:4b on an RTX 3090, and Piper on an i5.)

Great hardware to work with AI

People built some really cool voice hardware, from landline telephones to little talking robots, but the fact that it was so DIY was always a barrier to entry. To make our voice assistant available to everyone, we released the Home Assistant Voice Preview Edition. This is an easy and affordable way to try Home Assistant Voice. It has some seriously powerful audio processing hardware inside its sleek package. If you were on the fence about trying out voice, it really is the best way to get started. It’s now easier than ever to set up your Assist hardware to work with LLMs with our Voice Assistants settings page, and you can even assign a different LLM to each device. The LLM can recognize the room it’s in and the devices within it, making its responses more relevant.
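The streaming speedup in the table above comes down to simple arithmetic: a non-streaming pipeline waits for the entire LLM response before synthesis starts, while a streaming one only waits for the first few words. A tiny illustrative model (not Home Assistant's actual code; all timings below are made-up placeholders in the same ballpark as the quoted numbers):

```python
# Illustrative model of time-to-first-audio for streaming vs. non-streaming TTS.
# The function and all numbers are hypothetical, for intuition only.

def time_to_first_audio(token_times, tts_chunk_time, chunk_size, streaming):
    """Seconds until the first audio chunk is ready to play.

    token_times: per-token LLM generation latencies (seconds)
    tts_chunk_time: time to synthesize one chunk of text
    chunk_size: number of tokens the TTS engine waits for per chunk
    streaming: if True, synthesize as soon as the first chunk of tokens exists
    """
    if streaming:
        # Only the first chunk_size tokens gate the first audio chunk.
        return sum(token_times[:chunk_size]) + tts_chunk_time
    # Non-streaming: the full response must finish before synthesis starts.
    return sum(token_times) + tts_chunk_time

tokens = [0.05] * 120  # a 120-token reply generated at 50 ms per token
print(round(time_to_first_audio(tokens, 0.3, 5, streaming=False), 2))  # 6.3
print(round(time_to_first_audio(tokens, 0.3, 5, streaming=True), 2))   # 0.55
```

With these placeholder timings the gap (6.3 s vs. 0.55 s) lands in the same order of magnitude as the measurements in the table, which is the point: the longer the reply, the bigger the win.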
Assist was built to be a great way to control devices in your home, but with AI, it becomes so much more. AI-powered suggestions Last month, Home Assistant launched a new opt-in feature to leverage the power of AI when automating with Home Assistant. The goal is to shorten the journey from a blank slate to your finished idea. When saving an automation or script, users can now leverage the new Suggest button: When clicked, it will send your automation configuration along with the titles of your existing automations and labels to AI to suggest a name, description, category, and labels for your new automation. Over the coming months, we’re going to explore what other features can benefit from AI suggestions. To opt-in to this feature, you need to take two steps. First, you need to configure an integration that provides an AI Tasks entity. For local AI, you can configure Ollama, or you can also leverage cloud-based AI like Google, OpenAI, or Anthropic. Once configured, you need to go to the new AI Task preferences pane under System -> General and pick the AI Task entity to power suggestions in the UI. If you don’t configure an AI Tasks entity, the Suggest button will not be visible. AI Tasks gets the job done Enabling AI Tasks does more than quickly label and summarize your automations; its true superpower is making AI easy to use in templates, scripts, and automations. AI Tasks allow other code to leverage AI to generate data, including options to attach files and define how you want that data output (for instance, a JSON schema). We have all seen those incredible community creations, where a user leverages AI image recognition and analysis to detect available parking spots or count the number of chickens in the chicken coop. It’s likely that AI Tasks can now help you easily do this in Home Assistant, without the need for complex scripts, extra add-ons, or HACS integrations. 
Below is a template entity that counts chickens in a video feed, all via a short and simple set of instructions.

```yaml
template:
  - triggers:
      - trigger: homeassistant
        event: start
      - trigger: time_pattern
        minutes: "/5"
    actions:
      - action: ai_task.generate_data
        data:
          task_name: Count chickens
          instructions: >-
            This is the inside of my coop.
```

Source: Gladys Assistant (Official Blog) Security is the foundation of home automation. Today, a complete alarm system arrives in Gladys, letting you manage the security of your home.

Source: Home Assistant Community Forum (Latest) I just did a test migration to Z-Wave 1.0.1 following these instructions, which I realize might not be the final version. I don’t know if this is helpful, but I took some notes about a few things I found confusing. Overall, it was pretty smooth. Personally, I prefer configuring z-wave in ZUI and I’m not quite clear if that’s possible with the core app now. Will the old ZUI app still be available? (click for more details) 8 posts - 3 participants Read full topic

Source: Arduino Blog Artificial climbing walls are important for training, as few people can get to real rock walls regularly enough to keep up with practice. But like anything else, that can become boring if you’re just doing the same thing over and over again. To keep things fresh and fun, Superbender turned his indoor climbing wall into […]

Source: Gladys Assistant (Forum) Hello. I simply launched an update in Gladys by clicking on… Update… and my configuration was wiped. It no longer recognizes my email; I am temp-user… I had made a copy of the gladys folder in /var/lib. Will restoring a copy of this folder give me back access? Thanks. 21 posts - 3 participants Read the full topic

Source: Home Assistant Community Forum (Latest) I currently have a 1,500-square-foot, 3-floor home with a full wired and wireless network using VLANs and Home Assistant docker running on a 24/7 unRaid server. I currently have a number of Kasa WiFi switches and outlets in the system and would like to look at converting to Z-Wave for lighting control, but have a couple of complications. There are 3 BLE ceiling fans with integrated lights that I would like to convert to Z-Wave control but haven’t figured out a simple way to accomplish this. I am also wondering if a standalone HA server would be better than running on the unRaid server? 6 posts - 3 participants Read full topic

Source: Arduino Blog Mark your calendars: we’re heading to Embedded World 2026 (Nuremberg, Germany – March 10-12) and we can’t wait to see you there! Visit us in Hall 3, Booth #555 for live demos, hands-on experiences, and – here’s the big one – a major product announcement you won’t want to miss. We’re unveiling something revolutionary, and Embedded World […]

Source: Home Assistant Community Forum (Latest) Hey community! I built a modular package system that adds enterprise-grade capabilities.

What it does:
- C++ Alarm Engine — 4 severity levels (Nominal / Minor / Moderate / Critical)
- Zero-Config WiFi Motor — Improv BLE, Fallback AP, Captive Portal
- Full Hardware Telemetry — Free RAM, Memory Fragmentation, Loop Time, uptime (e.g. 4d 13h 25m)
- Dynamic Identity — Rename all 30+ sensors from HA without recompiling
- Copy → Paste → Compile → Enjoy

Architecture, just 3 files:
- configuracion_base.yaml: master template, drop it into your ESPHome dashboard
- lib-wifi_conectividad.yaml: WiFi + BLE + Watchdog engine
- lib-estados_alarmas.yaml: alarm brain

Compatibility: ESP32 · ESP32-C3 · ESP32 WROVER · ESP8266 (partial, see README)

Links: GitHub - RamiLEK001/ESPHome-Professional-Packages: Paquetes modulares optimizados para ESP32/ESP8266 en Home Assistant

Feedback and contributions very welcome! 1 post - 1 participant Read full topic

Source: Gladys Assistant (Forum) Following the addition of a “Favorites” option in the integrations, I suggest that the “Integrations” page open on “Favorites” by default when there are any. ref: “Gladys Assistant 4.68 : Matterbridge, intégrations favorites et Tasmota amélioré” (News). 1 post - 1 participant Read the full topic

Source: Gladys Assistant (Forum) Hello everyone, a new version of Gladys Assistant is available! Here is what's new in this version: Zigbee2mqtt: addition of the new Ember driver. Zigbee2mqtt now offers a new driver, Ember, for certain dongles such as the Sonoff ZBDongle-E. The Zigbee2mqtt integration in Gladys now lets you select this Ember driver for compatible dongles. If you are using EZSP, Zigbee2mqtt will not touch your installation without action on your part, to avoid breaking your setup. Stability is a core value of the project, and this update was designed not to disrupt your daily use. If you want to switch to Ember, you can, but you will probably need to update your Zigbee dongle's firmware first. For example, for the Sonoff Dongle-E, the integration lets you select the driver of your choice between “Ember” (the new default) and the old EZSP: If you try the new driver and your firmware is not compatible, don't panic, you will see a message: You can then either update the firmware or go back to EZSP in the meantime. Don't hesitate if you have any questions. A big thank you to @cicoub13 for this contribution! Dashboard: improved display of door-opening sensors. The display of door-opening sensors on the dashboard has been improved for better readability. We now display “Open/Closed” instead of the small padlock icon, which was not very legible. Tasmota: energy tracking added. Tasmota devices are now part of energy tracking. Thanks to @Terdious for this development. Updating is automatic, or you can force the update in the settings. Have a good end of the week, everyone! 5 posts - 3 participants Read the full topic

Source: Home Assistant Community Forum (Latest) Hi, I’m using the HomeWizard integration to fetch my P1 meter data, but the default polling interval is 5 seconds and I need 1 second… So, following the tutorial here: Home Assistant HomeWizard Instructions on how to integrate HomeWizard into Home Assistant. I disabled the default polling and created an automation. I can see it running in States every 1 second, but the sensors are not updating at all; they fall back to a 30-second interval. Doesn’t this service replace the interval? What am I missing?

```yaml
alias: Update Device Automation
initial_state: 'on'
trigger:
  - platform: time_pattern
    seconds: /1
action:
  - service: homeassistant.update_entity
    target:
      entity_id:
        - sensor.p1_meter_power_phase_1
        - sensor.p1_meter_power_phase_2
        - sensor.p1_meter_power_phase_3
```

7 posts - 4 participants Read full topic

Source: openHAB Community (Latest) Hi, can if ( !Ephemeris.isWeekend() ) work, or is it something AI made up? Also, on the forum here I found info that I could use just isWeekend in DSL, without Ephemeris. So in that case, would !isWeekend() work to trigger only on workdays? 3 posts - 3 participants Read full topic
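For context, the bare form mentioned in the question would look something like the sketch below in Rules DSL. This is a hedged illustration, not a verified answer to the thread: the rule name, cron expression, and `MorningLights` item are hypothetical, and the result depends on your Ephemeris configuration (locale, daysets).

```
rule "Workday morning lights"
when
    Time cron "0 0 7 ? * * *"   // every day at 07:00
then
    // Ephemeris actions are available as bare calls in Rules DSL,
    // per the forum info quoted above.
    if (!isWeekend()) {
        MorningLights.sendCommand(ON)
    }
end
```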

Source: Domoticz (Forum News) After losing my system... All of my auto backups were corrupt. I had to go back to a backup from 2023 - with the new 2025 domoticz. I've added back in all the new devices, and removed the old switches that are no longer part of my system... However, whenever I reset the windows pc that is running domoticz, all my newly added devices disappear, and the old switches come back. Any ideas? Statistics: Posted by binbo — Thursday 26 February 2026 23:20 — Replies 1 — Views 136

Source: Domoticz (Forum News) Hi all, I would like to request a new dummy device type in Domoticz for measuring 3-phase voltage. Currently, we have a dummy device for a 3-phase current (ampere) meter, which works very well. It would be great to have a similar option for voltage, allowing users to input and display voltage values for all three phases. Thanks for considering! Statistics: Posted by highvoltage — Thursday 26 February 2026 22:14 — Replies 1 — Views 186

Source: Adafruit Blog LEM shared this project on MakerWorld! This is a useful and practical print, apparently I’m good at hiding things on the inside of cupboard doors. This solves the problem of those bulky boxes under your kitchen sink. We always buy the bulk pack to save a few coins, they take up a lot of space. […]

Source: Gladys Assistant (Forum) Ref this discussion: “Tri des scènes dans un widjet”. Scenes are sorted alphabetically in a widget; it would be nice to be able to organize them non-alphabetically without having to juggle scene names. 1 post - 1 participant Read the full topic

Source: Domoticz (Forum News) Hello, would it make sense to be able to add any analog values to a graph, not necessarily only values related to environment sensors? For example, it would be interesting to combine external temperature and gas consumption. By the way, what is the set point check box for? I don't have any values there, although I do have thermostatic radiator valves with a setpoint. Statistics: Posted by Patricen — Thursday 26 February 2026 15:56 — Replies 0 — Views 118

Source: Arduino Blog Control theory is beautiful on paper – elegant equations, perfectly modeled systems, textbook-perfect responses. But between the mathematical ideal and the physical system lies a gap that trips up many engineers: noise, timing constraints, actuator limits, and the stubborn reality of hardware that refuses to behave exactly as the model predicts. Cristian Castro Lagos, a […]

Source: Domoticz (Forum News) After some retries I successfully migrated to beta build 17277 and converted to systemd control. Since that time my RFXCom won't start anymore, stating that the serial port cannot be opened. The serial port is identified by serial/by-id and present in the hardware configuration. Is there a relation to the (many) latest changes? Are there ways to discover the cause of the problem? Statistics: Posted by Doler — Thursday 26 February 2026 14:24 — Replies 2 — Views 132

Source: Adafruit Blog user_1850721599 shares: I like to create cute things to make life better. If you like my work, I hope to get your support. Thank you~ download the files on: https://makerworld.com/en/models/2327118-sweet-bow-silk-pen-holder Every Thursday […]

Source: Gladys Assistant (Forum) I name some of my scenes with “On” and “Off”, e.g. “Salon On” and “Salon Off”: nothing very original, I admit, which gives this: I would like to be able to put the “On” on top… Is it possible to choose the order of the scenes, or should I file a feature request? 5 posts - 3 participants Read the full topic

Source: Domoticz (Forum News) Hello, 1 - Context Domoticz V2025.2 (build 17277) running on bookworm Build Hash fac6d8f Compile Date 2026-02-26 06:42:05 dzVents Version 3.1.11 Python Version 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] 2 - How to find the domoticz log file I switched from the deprecated method of creating the domoticz service via the /etc/init.d/domoticz.sh file to the preferred one in /etc/systemd/system/domoticz.service. Previously I recorded domoticz messages in "/var/log/domoticz.log" by setting this line in the domoticz.sh file: DAEMON_ARGS="$DAEMON_ARGS -log /var/log/domoticz.log" Now, when using /etc/systemd/system/domoticz.service, I see the domoticz messages in the domoticz UI, but I cannot find the file in which they are stored. I would also like to define the name of the file holding these messages. I would be grateful if someone could help me or give me a hint. BR Statistics: Posted by meal — Thursday 26 February 2026 10:24 — Replies 7 — Views 265
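For readers in the same situation, a minimal sketch of what such a unit file could look like is below. Everything here is a hypothetical example (paths, user, port may differ on your install); only the `-log` argument itself comes from the poster's old `domoticz.sh` setup.

```ini
# Hypothetical /etc/systemd/system/domoticz.service sketch; adjust paths to your install.
[Unit]
Description=Domoticz Home Automation
After=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/domoticz
# Reuse the same -log argument the old init.d script passed:
ExecStart=/home/pi/domoticz/domoticz -www 8080 -log /var/log/domoticz.log
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Note that without any `-log` flag, systemd captures the service's stdout in the journal, which you can read with `journalctl -u domoticz`.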

Source: Gladys Assistant (Forum) Hello, I have open/close sensors, and I find the padlock icon on the dashboard too small, to the point where I have trouble telling an open padlock from a closed one. 8 posts - 4 participants Read the full topic

Source: Gladys Assistant (Forum) Hello everyone, when I added my roller-shutter switch over Zigbee, I would have liked Gladys to offer, at that moment, to configure its operating parameters. For example, I have an “up/down travel time” and a “motor direction” to configure. It would be great if Gladys could make this easier for us. Once you know you can do it from Z2M it's fine, but it would make Gladys more accessible for newcomers (like me). 3 posts - 3 participants Read the full topic

Source: Gladys Assistant (Forum) I have finally put my system for measuring the amount of water remaining in my tanks back into service, and I have a few widget improvements to request. Would it be possible for the name to be editable, or for the feature to be shown rather than the device? Here, I cannot tell the two tanks apart on the dashboard: On mobile, the widget blocks scrolling. Let me explain… If I put my finger on the widget, it is impossible to scroll the page up or down; it is not “touch-friendly”. Could this be improved in the next UX pass? 3 posts - 3 participants Read the full topic

Source: openHAB Community (Latest) Hi, I’m trying to upgrade from 5.0.3 to 5.1.3, but the changes to javascript have broken some of my rules and I don’t understand enough to figure it out. I have been using things like items.metadata.itemchannellink.replaceItemChannelLink(name, channeluid) items.getItem(nameofanitem).replaceMetadata(blah blah) and others from Home - openHAB JS When upgrading to 5.1.3, these rules fail when hitting these parts. Errors like: Failed to execute rule systemBuilder-Lights: TypeError: undefined has no such function “replaceItemChannelLink”: TypeError: undefined has no such function “replaceItemChannelLink” This feels like something to do with the new way things are injected? In the javascript settings page of the openhab UI, it looked like everything was set to automatic, but I don’t really understand what I’m looking at. Can anyone help? 5 posts - 3 participants Read full topic

Source: Domoticz (Forum News) NAS did a Watchtower update in Docker and Domoticz crashed. Log:

2026-02-25 13:00:28.418 Launch: Begin container self-repair
2026-02-25 13:00:28.630 Launch: End container self-repair
2026-02-25 13:00:28.632 Launch: Running customstart.sh
2026-02-25 13:00:28.650 Status: Domoticz V2025.2 (build 17268) (c)2012-2026 GizMoCuz
2026-02-25 13:00:28.650 Status: Build Hash: 5bae973be, Date: 2026-02-25 10:10:03
2026-02-25 13:00:28.650 Status: Startup Path: /opt/domoticz/
2026-02-25 13:00:28.665 Sunrise: 07:30:00 SunSet: 18:08:00
2026-02-25 13:00:28.665 Day length: 10:37:00 Sun at south: 12:49:00
2026-02-25 13:00:28.665 Civil twilight start: 06:56:00 Civil twilight end: 18:42:00
2026-02-25 13:00:28.665 Nautical twilight start: 06:17:00 Nautical twilight end: 19:21:00
2026-02-25 13:00:28.665 Astronomical twilight start: 05:38:00 Astronomical twilight end: 20:01:00
2026-02-25 13:00:28.706 Status: PluginSystem: Started, Python version '3.11.2', 4 plugin definitions loaded.
2026-02-25 13:00:28.716 Active notification Subsystems: email (1/13)
2026-02-25 13:00:28.718 Status: WebServer(HTTP) started on address: :: with port 80
corrupted size vs. prev_size
2026-02-25 13:00:28.719 Error: Domoticz(pid:1, tid:1('domoticz')) received fatal signal 6 (Aborted)
2026-02-25 13:00:28.719 Error: siginfo address=0x1, address=0x7f955511eeec

Statistics: Posted by Huntback — Wednesday 25 February 2026 13:31 — Replies 7 — Views 317

Source: Domoticz (Forum News) Just updated my test system from 2025.2 17252 to 17265 and it fails to restart. The update counter goes all the way to 99, which is unusual. At first I thought we had the old issue back again, so I did a stop and start from the command line, but nothing. Performing a stop and start and then a systemctl status gets me the output below, but there is no GUI, and despite it saying it's started, none of my switches work, so I'd say it's not actually running.

Feb 25 07:21:47 domoticz systemd[1]: Starting domoticz.service - LSB: Home Automation System...
Feb 25 07:21:47 domoticz domoticz.sh[1699517]: Time synchronized, starting Domoticz...
Feb 25 07:21:47 domoticz domoticz.sh[1699525]: 2026-02-25 07:21:47.607 Status: Domoticz V2025.2 (build 17265) (c)2012-2026 GizMoCuz
Feb 25 07:21:47 domoticz domoticz.sh[1699525]: 2026-02-25 07:21:47.608 Status: Build Hash: 66fd1d463, Date: 2026-02-25 05:51:29
Feb 25 07:21:47 domoticz domoticz.sh[1699525]: 2026-02-25 07:21:47.608 Status: Startup Path: /home/pi/domoticz/
Feb 25 07:21:47 domoticz domoticz.sh[1699525]: domoticz: Domoticz is starting up....
Feb 25 07:21:47 domoticz domoticz[1699525]: Domoticz is starting up....
Feb 25 07:21:47 domoticz systemd[1]: Started domoticz.service - LSB: Home Automation System.

Update: rolled back to 17252 and all is good. Statistics: Posted by Dave21w — Wednesday 25 February 2026 8:25 — Replies 17 — Views 529

Source: openHAB Community (Latest) Hello. I found out that my OH3 has a delay of 30 minutes in all timestamps, such as network:ping "last seen" and in graphs created from Modbus values (smart meter: frequency, power, U, I), even though it shows the correct NTP time. The result is that frequency and power charts are always shown half an hour behind the real local time, and the OH-internal timestamps (rrd recordings) seem to do the same. I checked the regional and ephemeris settings in the UI but I can't find anything misconfigured. I need help finding where this could be set up wrong. Sorry, I don't write in English often (Germany), and the editor here is much slower than OH on my old netbook. Platform information: Hardware: Acer Aspire One (Atom 32-bit) OS: Lubuntu openHAB version: 3.4.2 4 posts - 2 participants Read full topic

Source: openHAB Community (Latest) Hi all, I have been using openHAB for more than 10 years, so regressions and post-upgrade troubleshooting are well known to me. I tried to leverage AI agents to create a system for building testable rules and widgets, and I'm quite happy with the result. As usual, my time is very limited, especially as I want to finish the Jellyfin binding to make it available to all users (locally it is under testing with 5.1.2). So I dared to task the AI with describing the setup. Not ideal, maybe, but I hope it serves as inspiration on how to achieve a stable setup over time; I preferred that to not sharing the idea at all. Note: there are quite a few simplifications in the summary, as the whole container, network, and AI agent setup is omitted. With kind regards. And now I give the word to Claude: Note: This article was automatically generated with AI. The rule modifications, tests, and troubleshooting workflows themselves are developed and refined using AI agents as coding partners. The approach documented here reflects our operational practices, not necessarily universal "best practices." Introduction Home automation rules are complex. They integrate multiple devices, handle edge cases, and run 24/7 with minimal supervision. Yet many openHAB users manage rules the way they manage configuration files: manually, with minimal testing, and with limited visibility into execution. Our Setup: 90 production rule files, 6 shared library modules, 92 unit tests, all deployed and managed using the testing and deployment pipeline described below. This article describes our approach: automated regression testing with Jest, system endpoint testing with Puppeteer, live interaction via the openHAB REST API, and interactive troubleshooting with Karaf. Each tool addresses a specific part of the development lifecycle.
Overview: The Rule Development Workflow

Rule Code (JavaScript)
  ↓ Local Unit Tests (Jest)
  ↓ System Integration Tests (Puppeteer + REST API)
  ↓ Deploy to openHAB
  ↓ Live Troubleshooting (Karaf Console)

Each tool serves a specific purpose:
- Jest → unit tests catch logic errors before deployment
- Puppeteer + REST API → integration tests validate end-to-end behavior
- Karaf → interactive debugging when things go wrong

Part 1: Regression Testing with Jest

Jest runs unit tests in milliseconds without requiring an openHAB instance. It handles the core logic layer: parsing, decision-making, state coordination. openHAB injects runtime globals (items, rules, events), so tests mock these to isolate the code under test.

Test Structure: Unit Tests vs. Integration Tests
Integration Tests: Testing System Interaction
Setup: Package Configuration
Testing Approach: Mocking and Isolation

Part 2: End-to-End Testing with REST API and Puppeteer

Integration tests validate the full lifecycle: command → item updates → rule triggers → side effects occur. The REST API and Puppeteer are the test interfaces to a real openHAB instance.

Using the REST API for Integration Tests

The openHAB REST API provides endpoints for items, rules, and logs. From Node.js:

// Get item state
const getItemState = async (itemName) => {
  const response = await fetch(`http://localhost:8080/rest/items/${itemName}`);
  return response.json();
};

// Trigger a rule
const triggerRule = async (ruleUid) => {
  return fetch(`http://localhost:8080/rest/rules/${ruleUid}/runnow`, { method: "POST" });
};

These calls simulate what a user's dashboard or external system would do. They verify rules respond correctly to real events.

Puppeteer for UI Testing

Puppeteer automates a browser to test dashboards and sitemap pages.
Key techniques:
- page.goto() → navigate to openHAB
- page.click() → simulate user interaction
- page.waitForFunction() → wait for async state updates
- page.$eval() → query and verify DOM content

The pattern is: send a REST command → wait for the UI to update → verify the change is visible:

// E2E: Verify UI reflects item state change
await fetch(`http://localhost:8080/rest/items/Light`, { method: "POST", body: "ON" });
await page.waitForFunction(() => {
  const element = document.querySelector("[data-item='Light']");
  return element?.textContent.includes("ON");
});

This catches issues that pure API tests miss: slow UI updates, missing bindings, incorrect state formatting.

Part 3: Deployment: Rules vs. Libraries

openHAB rules and shared libraries are deployed using fundamentally different mechanisms. This distinction matters: it affects how you structure code, how you test, and how you troubleshoot.

Architecture: How Tests Feed into Deployment

Before any code reaches openHAB, the deployment system performs comprehensive sanity checks:

Source Code (rules/*.js + metadata, lib/*.js)
  ↓ [Test Assert] npm test - unit tests, error paths, edge cases
  ↓ [Sanity Check] Verify GraalVM compatibility (JSDoc + IIFE incompatibility)
  ↓ [Sanity Check] Verify dependencies exist (all required shared modules)
  ↓ [Sanity Check] Verify metadata IDs are present (triggers, actions must have IDs)
  ↓ [Rules] Strip IIFE wrapper + construct REST payload with metadata
    [Libs] rsync to node_modules with checksum verification
  ↓ REST API POST/PUT with proper action/trigger configuration
  ↓ [Sanity Check] Verify rule status transitions to IDLE (not UNINITIALIZED)
  ↓ [Production] Enable rule

Shared Libraries: rsync Deployment with Verification

Shared libraries (modules in lib/) deploy via rsync to the remote openHAB server's node_modules directory.

Rules: IIFE Stripping and REST API Deployment

Rules require transformation:
- IIFE wrapper stripped (added for Jest, not needed in openHAB)
- Metadata merged (triggers, actions, tags from JSON)
- IDs generated for triggers and actions
- REST payload constructed with proper structure
- GraalVM cache cleared (delete/recreate)

Metadata Separation and ID Generation
GraalVM Cache Bypass Strategy

Pre-Deployment Sanity Checks

Before any REST API call, the deployment script validates:

Check | Failure mode
Files exist | Obvious error, deployment stops
Tests pass | Catch logic errors in Jest before going live
Dependencies deployed | Missing library → require() fails at runtime
Trigger/action IDs present | Missing IDs → rule status UNINITIALIZED
Action type is correct | Must be script.ScriptAction, not application/javascript
No GraalVM incompatibilities | JSDoc+IIFE pattern → silent execution skip
Server responds | API accessible, auth valid
Rule reaches IDLE | Indicates successful initialization

Example: Missing Dependency Check

# Verify required modules are deployed
for module in openhab-utils alert-tracker multimedia-constants; do
  if grep -q "require('$module')" rules/012d853fc5.js; then
    ssh openhab-host ls conf/automation/js/node_modules/$module/index.js || {
      echo " Module not deployed: $module"
      exit 1
    }
  fi
done

Live Interaction: REST API for Testing and Monitoring

After deployment, the REST API is your window into rule execution:

# Trigger a rule manually
curl -X POST http://localhost:8080/rest/rules/012d853fc5/runnow

# Check rule status
curl http://localhost:8080/rest/rules/012d853fc5 | jq '.statusInfo'

# List all rules and their status
curl http://localhost:8080/rest/rules | jq '.[] | {uid, name, enabled, statusInfo}'

// After stripping: rules/alert-stale-items.js (deployed to openHAB)
("use strict");
const alertTracker = require("alert-tracker");
const { safeGetItem } = require("openhab-utils");

const checkStaleItems = () => {
  // ... implementation ...
};

// Direct call (openHAB rule engine triggers this)
checkStaleItems();

How Stripping Works: The deployment script uses Python regex to detect and remove the IIFE:

Arrow IIFE: ((exports) => { ... })(...)
- Removes the outer wrapper
- Removes the inner if (exports) { ... } export block
- Replaces the if (typeof event !== "undefined") { runRule(); } guard with a direct call

Function IIFE: (function () { ...
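For illustration, the arrow-IIFE stripping step described above can be sketched in JavaScript (the article says the real deployment script uses Python regex, so this port, its regexes, and the sample rule body are assumptions, not the project's actual code):

```javascript
// Hypothetical sketch of the arrow-IIFE stripping step described above.
// Assumes a Jest-side rule file shaped like:
//   ((exports) => { ...body... })(typeof module !== "undefined" ? module.exports : undefined);
function stripArrowIife(source) {
  // Capture the body between the wrapper's braces.
  const wrapper = /^\(\(exports\)\s*=>\s*\{([\s\S]*)\}\)\([\s\S]*?\);?\s*$/;
  const match = source.trim().match(wrapper);
  if (!match) return source; // not wrapped: deploy as-is
  let body = match[1];
  // Remove the Jest-only export block.
  body = body.replace(/if\s*\(exports\)\s*\{[\s\S]*?\}\n?/, "");
  // Replace the runtime guard with a direct call.
  body = body.replace(
    /if\s*\(typeof event !== "undefined"\)\s*\{\s*runRule\(\);\s*\}/,
    "runRule();"
  );
  return '("use strict");\n' + body.trim() + "\n";
}

const wrapped = `((exports) => {
const runRule = () => { /* ... */ };
if (exports) { exports.runRule = runRule; }
if (typeof event !== "undefined") { runRule(); }
})(typeof module !== "undefined" ? module.exports : undefined);`;

const deployed = stripArrowIife(wrapped);
console.log(deployed.includes("runRule();"));      // true: direct call kept
console.log(deployed.includes("exports.runRule")); // false: export block removed
```

The same idea extends to the function-IIFE variant with a second regex; any real implementation would also need the metadata merge and ID generation steps listed above.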

Source: Gladys Assistant (Forum) Good evening, I just noticed that I am seeing containers on Gladys's system page: whereas I only have this: I think there is a small problem. 4 posts - 2 participants Read full topic

Source: Gladys Assistant (Forum) Good evening, Details on the hardware used: PC starter kit via @pierre-gilles, Sonoff P Zigbee dongle, IKEA Zigbee plug. Since yesterday afternoon my plugs no longer report consumption in real time; it even freezes at a single value. Any ideas on how to solve this problem, please? Washing machine switched off. 26 posts - 4 participants Read full topic

Source: Gladys Assistant (Forum) Good evening, I had to move my server, and when putting it back together I must have plugged my Sonoff dongle into the wrong USB port... so when I restart everything, I now get: Gladys log: Z2m log: I tried to troubleshoot, but I'm not on top of all this. I would gladly take a little troubleshooting session that will hopefully help everyone. Thanks in advance. 13 posts - 3 participants Read full topic

Source: Domoticz (Forum News) I've just updated my copy of Domoticz. Stupidly, I have no idea what version I had, but I've updated to the latest version: 2025.2.16812. And I've noticed that a device that used to work perfectly with an "off delay" of 1 second now triggers for too long and no longer works. Statistics: Posted by binbo — Monday 23 February 2026 19:46 — Replies 7 — Views 373

Source: Gladys Assistant (Forum) Hello, I don't know if it is since the update or since I moved my MQTT into Docker, but no zigbee2mqtt device is controllable in Gladys anymore. I have a lot of logs like this:

2026-02-23T19:05:07+0100 handleMqttMessage.js:109 () Failed to convert value for device Prise baie informatique: Error: Zigbee2mqqt expose not found on device "Prise baie informatique" with property "linkquality".
    at Zigbee2mqttManager.readValue (/src/server/services/zigbee2mqtt/lib/readValue.js:16:11)
    at /src/server/services/zigbee2mqtt/lib/handleMqttMessage.js:105:31
    at Array.forEach ()
    at Zigbee2mqttManager.handleMqttMessage (/src/server/services/zigbee2mqtt/lib/handleMqttMessage.js:97:41)
    at MqttClient. (/src/server/services/zigbee2mqtt/lib/connect.js:60:12)
    at MqttClient.emit (node:events:519:28)
    at MqttClient._handlePublish (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:1277:12)
    at MqttClient._handlePacket (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:410:12)
    at work (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:321:12)
    at Writable.writable._write (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:335:5)
    at doWrite (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:409:139)
    at writeOrBuffer (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:398:5)
    at Writable.write (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:307:11)
    at TLSSocket.ondata (node:internal/streams/readable:1009:22)
    at TLSSocket.emit (node:events:519:28)
    at addChunk (node:internal/streams/readable:561:12)
    at readableAddChunkPushByteMode (node:internal/streams/readable:512:3)
    at TLSSocket.Readable.push (node:internal/streams/readable:392:5)
    at TLSWrap.onStreamRead (node:internal/stream_base_commons:189:23)

The same "Zigbee2mqqt expose not found" error then repeats, with an identical stack trace, for the properties "power", "state", and "voltage" on "Prise baie informatique", and at 2026-02-23T19:05:09+0100 for "humidity" on "Capteur air salon".

Source: Domoticz (Forum News) Hi, I am using a 3-phase amp device, type "Current, CM113, Electrisave". It shows the amp values as expected, but if there is no current on any of the phases it shows zero only once. Is this as intended, or is it a bug? -Bart Statistics: Posted by BartSr — Monday 23 February 2026 14:01 — Replies 2 — Views 229

Source: Gladys Assistant (Forum) Hello, I am trying to discover my Sonos Ray, and after several failures I see this in the logs: 2026-02-23T10:25:32+0100 errorMiddleware.js:68 (errorMiddleware) SonosDiscoveryError: No players found @svrooij/sonos/lib/sonos-device-discovery.js:96:24) @svrooij/sonos/lib/sonos-device-discovery.js:84:25) There is clearly a timeout, but this error comes too fast, in under 10 s. Thanks in advance. 11 posts - 3 participants Read full topic

Source: Gladys Assistant (Forum) Hello, Following my tutorial here: https://community.gladysassistant.com/t/connexion-cle-smlight-a-zigbee2mqtt-sur-gladys-full-ssl-tls-mqtt-et-z2m-externe I am opening this feature request for supporting SMLIGHT dongles over the network in Gladys's zigbee2mqtt. Thanks. 1 post - 1 participant Read full topic

Source: Domoticz (Forum News) Hello, "domoticz.email" was working fine until early February, but strangely, no emails have been sent since. I haven't changed any settings, and the "Test" function in the settings works correctly. Thinking it might be a problem with my ISP, I changed the email address, but it didn't help. A Python script on the same Raspberry Pi sends emails correctly using the original address. What's the problem? Note that I am staying on beta 17099, due to a recent problem with the beta versions. Thank you Statistics: Posted by Filnet — Monday 23 February 2026 9:51 — Replies 4 — Views 409

Source: Domoticz (Forum News) Hello, Code:

2026-02-23 00:00:10.144 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 01:00:10.088 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 02:00:10.037 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 02:11:10.555 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 02:21:10.077 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 04:00:14.835 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 05:00:10.101 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 07:00:10.274 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 08:00:10.399 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 08:36:10.203 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds
2026-02-23 09:00:13.276 Error: EventSystem: Warning!, lua script dzVents script (unknown - still executing) has been running for more than 10 seconds

How do I find this "unknown"? Could the "executeShellCommand" command be the reason? Thanks for your help Statistics: Posted by Filnet — Monday 23 February 2026 9:40 — Replies 0 — Views 225

Source: openHAB Community (Latest) Hi, everyone. I’m running OH 4.3.1 on a Raspberry Pi (CENTOS 9). For a while, I haven’t been able to connect to my OH from Visual Studio Code (macOS Tahoe 26.2), but I didn’t need it much before. Now I need it for code completion, but I can’t seem to get it to work. I keep getting the following errors: openHAB Extension Output openHAB vscode extension has been activated [Error - 4:14:14 PM] Connection to server is erroring. Shutting down server. [Error - 4:14:14 PM] Connection to server is erroring. Shutting down server. Could not reload items for HoverProvider Could not reload items for Items Explorer Could not reload items for Things Explorer --- Error: Error while connecting to openHAB REST API. Message: Error: connect EHOSTUNREACH 10.18.18.102:8080 - Local (10.18.18.4:53185) --- --- Error: Error while connecting to openHAB REST API. Message: Error: connect EHOSTUNREACH 10.18.18.102:8080 - Local (10.18.18.4:53186) --- openHAB Language Server Output (node:11365) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead. (Use Code Helper (Plugin) --trace-deprecation ... to show where the warning was created) Error: getaddrinfo ENOTFOUND openhabianpi at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:122:26) { errno: -3008, code: 'ENOTFOUND', syscall: 'getaddrinfo', hostname: 'openhabianpi' } OH is running. I can connect via a web browser and via SSH from my Mac’s Terminal. I have deleted all global (User) connection settings and left only the ones at the Workspace level (.vscode/settings.json), as suggested in VS Code OH5 - Error while connecting to openHAB REST API. I also tried the other way around, with the connection strings in the User configuration, to no avail. 
"openhab.connection.host": "10.18.18.102",
"openhab.connection.authToken": "oh.VisualStudioCL.a0XXXXXXXXXXXXX",
"openhab.connection.port": 8080

The other settings are not in the textual configuration, and adding them there doesn't change the outcome. I have checked that the ports are open in the firewall, and that they are listening on TCP, using ss -tuln:

Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp   LISTEN 0      50     *:5007             *:*
tcp   LISTEN 0      50     *:8080             *:*
tcp   LISTEN 0      128    [::]:22            [::]:*
tcp   LISTEN 0      50     *:8443             *:*

On OH, the LSP is listening on the same port. I even deleted the old token and created a new one. However, I have not configured any 'openhabianpi' anywhere, so I don't know where the LSP is getting that from. I have restarted VS Code multiple times, as well as deleted the extension and installed it again. What else am I overlooking? 2 posts - 1 participant Read full topic

Source: Gladys Assistant (Forum) Complete Thermostat feature: find another icon for the Frost Protection mode. I just wanted a snowflake, but it isn't present in the Feather icon set. Apparently we are missing 25 icons compared to the latest version (4.29.0), but no snowflake there either. 3 posts - 2 participants Read full topic

Source: Gladys Assistant (Forum) Hello everyone, we have had a few issues lately with the Airplay integration. The library used to connect to the speaker was a bit old; I forked it to try to keep it up to date and make it work with the latest Node versions, but a large part of the code isn't clean, and it went through C++ function bindings, which made maintenance complex. There is a new lib that does the same thing in 100% TypeScript; I have switched to it, it works just as well, and it will be simpler to maintain later. If any Airplay users want to test, a Docker image is available: bertrandda/gladys:airplay. 2 posts - 2 participants Read full topic

Source: Gladys Assistant (Forum) Hello, This way we will be able to archive old topics if that is useful, and allow @pierre-gilles to access them more quickly. 5 posts - 4 participants Read full topic

Source: Domoticz (Forum News) The systemd script in /etc/systemd/system is:

Code:
[Unit]
Description=Domoticz Home Automation Service
After=network-online.target

[Service]
Type=simple
User=pi
Group=pi
WorkingDirectory=/home/pi/domoticz
ExecStart=/home/pi/domoticz/domoticz -daemon -www 8080 -sslwww 443 -pidfile /var/run/domoticz/domoticz.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target

The result of starting the systemd script:

Code:
domoticz.service - Domoticz Home Automation Service
     Loaded: loaded (/etc/systemd/system/domoticz.service; enabled; preset: enabled)
     Active: inactive (dead) since Sun 2026-02-22 14:10:25 GMT; 5min ago
   Duration: 191ms
 Invocation: 25c5fb4ab80c413489635312fe0ac205
    Process: 1079 ExecStart=/home/pi/domoticz/domoticz -daemon -www 8080 -sslwww 443 -pidfile /var/run/domoticz/domoticz.pid (code=exited>
   Main PID: 1079 (code=exited, status=0/SUCCESS)
        CPU: 545ms
Feb 22 14:10:23 rpidhcpserver systemd[1]: Started domoticz.service - Domoticz Home Automation Service.
Feb 22 14:10:23 rpidhcpserver domoticz[1079]: 2026-02-22 14:10:23.847 Status: Domoticz V2025.2 (c)2012-2025 GizMoCuz
Feb 22 14:10:23 rpidhcpserver domoticz[1079]: 2026-02-22 14:10:23.847 Status: Build Hash: e63981b18, Date: 2025-10-13 10:42:57
Feb 22 14:10:23 rpidhcpserver domoticz[1079]: 2026-02-22 14:10:23.847 Status: Startup Path: /home/pi/domoticz/
Feb 22 14:10:23 rpidhcpserver domoticz[1079]: domoticz: Domoticz is starting up....
Feb 22 14:10:23 rpidhcpserver domoticz[1079]: Domoticz is starting up....
Feb 22 14:10:23 rpidhcpserver domoticz[1080]: Domoticz is exiting...
Feb 22 14:10:25 rpidhcpserver systemd[1]: domoticz.service: Deactivated successfully.

If I run the ExecStart command from the command line, the domoticz daemon is created. Why can't systemd do the same?

Code:
/home/pi/domoticz/domoticz -daemon -www 8080 -sslwww 443 -pidfile /var/run/domoticz/domoticz.pid

Result of running it from the command line as user pi:

Code:
pi@rpidhcpserver:~/domoticz $ /home/pi/domoticz/domoticz -daemon -www 8080 -sslwww 443 -pidfile /var/run/domoticz/domoticz.pid
2026-02-22 14:17:42.839 Status: Domoticz V2025.2 (c)2012-2025 GizMoCuz
2026-02-22 14:17:42.839 Status: Build Hash: e63981b18, Date: 2025-10-13 10:42:57
2026-02-22 14:17:42.839 Status: Startup Path: /home/pi/domoticz/
domoticz: Domoticz is starting up....

I did use systemd tmpfiles.d to create /run/domoticz. /etc/tmpfiles.d/domoticz.conf contains:

Code:
#Type Path Mode UID GID Age Argument
d /run/domoticz 0755 pi pi - -

Thanks, Chris Statistics: Posted by cmisip — Sunday 22 February 2026 15:21 — Replies 1 — Views 330

Source: Gladys Assistant (Forum) Hello, the "sonoff z-wave 800 dongle plus" dongle, model dongle-PZG23, is not recognized by Gladys in Z-WAVE JS UI. Thanks. 2 posts - 2 participants Read full topic

Source: Domoticz (Forum News) Hi, below I will explain how I managed to obtain an approximate daily gas consumption from the electrical energy consumed by the gas boiler. The boiler is monitored by a smart plug, and I created a custom sensor in Domoticz that records daily gas consumption. To obtain an approximate figure, you must first monitor the consumption on the gas meter for one day. Similarly, you monitor the boiler's energy consumption in kWh for that day. Then: Daily gas consumption = daily energy consumption of the gas boiler (in kWh) × multiplier. In Domoticz this is the smart plug's counterToday × multiplier; in my case the multiplier value is 21 to obtain the approximate gas consumption, but you can also test with other values until it gives the correct result. Then create the following dzVents script:

Code:

return {
    active = true,
    on = {
        devices = { 139 } -- idx of the smart plug
    },
    execute = function(domoticz, device)
        ----------------------------------------------------
        -- CONFIGURATION
        ----------------------------------------------------
        local MULTIPLICATOR = 21 -- multiplier, replace with your own value
        local GAZ_IDX = 520      -- idx of your Custom Sensor "Estimated Gas Consumption"
        ----------------------------------------------------
        -- MAIN LOGIC
        ----------------------------------------------------
        local val = device.counterToday or 0
        local consumKWh = 0
        if type(val) == "string" then
            consumKWh = tonumber(val:match("([%d%.]+)")) or 0
        else
            consumKWh = tonumber(val) or 0
        end
        local consumGaz = consumKWh * MULTIPLICATOR
        local gazDevice = domoticz.devices(GAZ_IDX)
        if gazDevice then
            gazDevice.updateCustomSensor(consumGaz)
            domoticz.log(string.format("idx[%d]: %.3f kWh → Gas estimated: %.3f m³ (x%.3f)",
                device.id, consumKWh, consumGaz, MULTIPLICATOR), domoticz.LOG_INFO)
        else
            domoticz.log(string.format("Error: we did not find the device with idx %d", GAZ_IDX), domoticz.LOG_ERROR)
        end
    end
}

LE: in my case only the gas boiler uses gas. I hope it helps you too. Statistics: Posted by pfloryann — Saturday 21 February 2026 12:25 — Replies 5 — Views 406
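The calibration step described above (read the gas meter and the plug's kWh counter over the same day, then divide) can be sketched like this; the readings below are illustrative values, not figures from the post:

```python
# Calibrating the kWh -> m³ multiplier described above.
# Illustrative readings (not from the post): monitor the gas meter and
# the smart plug over the same day, then divide.
gas_m3_for_day = 4.2        # read from the gas meter
boiler_kwh_for_day = 0.2    # smart plug counterToday

multiplier = round(gas_m3_for_day / boiler_kwh_for_day, 3)
print(multiplier)  # 21.0

# Any later day's estimate is then just kWh measured that day x multiplier:
estimated_gas_m3 = round(0.15 * multiplier, 3)
print(estimated_gas_m3)  # 3.15
```

The one-day calibration inherits any untypical behaviour of that day, so averaging over a week of readings would give a steadier multiplier.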

Source: Domoticz (Forum News) 2026-02-21 09:45:32.250 [e6df21e0] Error: SamSam: Error getting http data! (info) This is all I get from the debug logging, not much info. I tried with my second Pi running Domoticz, with the same result. Statistics: Posted by tonbor — Saturday 21 February 2026 9:51 — Replies 3 — Views 230

Source: Domoticz (Forum News) Hello, I'm running Domoticz in Docker and I see this in the startup log: Code: Status: PluginSystem: Failed dynamic library load, install the latest libpython3.x library that is available for your platform. About Domoticz v2025.2 (build 17204) System Information Build Hash 71a61ff22 Compile Date 2026-02-20 11:33:19 dzVents Version 3.1.8 Python Version None I entered the command prompt of the Docker container and it says it is installed: root@a78701199ed2:/opt/domoticz# apt install libpython3-stdlib Reading package lists... Done Building dependency tree... Done Reading state information... Done libpython3-stdlib is already the newest version (3.11.2-1+b1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Statistics: Posted by Varazir — Friday 20 February 2026 22:20 — Replies 4 — Views 356

Source: Domoticz (Forum News) Hi, As the title says, I’m currently working on a new theme for personal use. It doesn’t have a name yet. The theme is fully responsive and works on both desktop and mobile devices. All settings are stored in a virtual text device. The blocks can be moved between columns within the same column structure, and columns themselves can also be repositioned. Column reordering is only available on desktop. On mobile, the layout follows the saved (static) configuration and cannot be rearranged. This is mainly to prevent conflicts with swipe gestures on phones and tablets. The weather icons dynamically reflect the current conditions. For example, freezing temperatures show an ice icon, rain displays a rain icon, and so on. Development has only just started, so please don’t ask for a download yet — it’s not finished. At the moment, it uses a room with favorites, but switching to another room is straightforward and easy to configure via a config file. It takes all (favorite) devices/sensors automatically from inside the room. Each block includes an instant update “glow” indicator with a timestamp of the last update, so you can instantly see live activity. So far I made it as a test, but we are using it on all our phones and tablets and it works great. There is still work to do, but for what we need it works excellently. If you’re interested, let me know. Depending on any feedback or tips, I might take this to the next level and consider sharing it. Barry Desktop: Mobile: Statistics: Posted by BarryT — Friday 20 February 2026 19:34 — Replies 12 — Views 605

Source: Gladys Assistant (Forum) Hello, I currently have a SONOFF Zigbee 3.0 USB Dongle Plus ZBDongle-P that I use to pair all my Zigbee devices. I would now like to pair my Ikea Matter devices, and I wanted to confirm that I need to either: buy a DIRIGERA Ikea gateway, for example; flash my dongle with a firmware that supports Matter; or buy another dongle, the SONOFF Zigbee 3.0 USB Dongle Plus ZBDongle-E, which does handle Matter??? Thanks in advance. 2 posts - 2 participants Read the full topic

Source: Gladys Assistant (Forum) Hello, I would like my Zigbee presence detector to turn the living room light on automatically only if it is triggered before sunrise or after sunset. I tried to do this: but I figure that doing this for every single day is not very practical. Am I going about it the wrong way? Is there a simpler, more elegant solution? Thanks in advance. 3 posts - 2 participants Read the full topic
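Platform aside, the underlying condition the poster wants is a plain "is it dark?" test. A minimal sketch with placeholder sunrise/sunset times (a real setup would take these from the platform's sun/ephemeris data rather than hard-coding them):

```python
from datetime import time

# Generic "is it dark?" check: true before sunrise or after sunset.
# The sunrise/sunset values below are placeholders, not real ephemeris data.
def is_dark(now, sunrise, sunset):
    return now < sunrise or now > sunset

sunrise, sunset = time(7, 42), time(18, 5)
print(is_dark(time(6, 30), sunrise, sunset))   # True
print(is_dark(time(12, 0), sunrise, sunset))   # False
print(is_dark(time(21, 15), sunrise, sunset))  # True
```

With a single condition like this driven by live sun data, there is no need to configure anything per day.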

Source: Gladys Assistant (Forum) Hello, everything works fine, but I would like to access the Z2M URL given here: I tested with 2 browsers, using the IP or the server name, but I get a timeout every time. I don't know what is going on. Thanks in advance. 3 posts - 2 participants Read the full topic

Source: Gladys Assistant (Forum) Hello, my project is to control the heaters in my game room via Gladys, so now I have everything: my DIRIGERA bridge configured in Gladys; my NODON module; during installation, the NODON module was automatically connected to my network without any action on my part. In Gladys, under Matter, I do see a new device, but it only has 2 features, Switch and Index. The Index does seem to report the consumption, but the switch has no effect on the heater. So my question is: did I miss something, or will my current configuration not allow anything better? Thanks, Greg 20 posts - 3 participants Read the full topic

Source: Domoticz (Forum News) Version: 2025.2 (build 17175) Platform: Docker on NAS Hello, since the last update, the button for viewing a rainfall report, which used to be located in the top right corner of the rain sensor log screen, has disappeared. Will it be possible to restore it in the next version? Thank you. Statistics: Posted by Bimnomercy — Friday 20 February 2026 7:16 — Replies 4 — Views 339

Source: Domoticz (Forum News) Hi all, since the update to v2025.2 (build 17186), all my dzvents lua automations are not triggered anymore. For example I see that a switch changes state, but the lua script is not triggered. (Also the CPU is much lower than before the update) Anybody else seeing the same issue? Statistics: Posted by rugspin — Thursday 19 February 2026 19:27 — Replies 6 — Views 363

Source: Domoticz (Forum News) Hello, I have a power widget, used only for power; it measures 700 W for 3 hours per day. It works fine, no problem with it, but when I go to the logs, the last one is the yearly "comparative" view. What is the value being compared? It is something like 7,000.00 W. Is it power × hours × days? But the value is not the right one. I really have no clue where this value comes from... Statistics: Posted by Thorgal789 — Thursday 19 February 2026 17:33 — Replies 5 — Views 373
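If the comparative view is meant to show energy (power × time), the expected order of magnitude can be checked from the figures in the post; a quick sanity check, assuming 365 days of the pattern described:

```python
# Sanity-checking the yearly comparison with the numbers from the post:
# a 700 W load running 3 hours per day.
power_w = 700
hours_per_day = 3

daily_wh = power_w * hours_per_day        # 2100 Wh = 2.1 kWh per day
yearly_kwh = daily_wh * 365 / 1000        # 766.5 kWh per year

print(daily_wh, yearly_kwh)  # 2100 766.5
```

A comparative figure near 7,000 with a "W" unit therefore matches neither the daily nor the yearly energy, which suggests a unit or aggregation mismatch in the widget rather than measured usage.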

Source: Embedded.com Renesas Electronics Corporation has announced the development of a configurable ternary content-addressable memory (TCAM) implemented using a 3-nm FinFET manufacturing process. The new design combines increased storage density, reduced power consumption, and enhanced functional safety, positioning it for use in automotive environments. The company presented its results at the International Solid-State Circuits Conference 2026, held [...]

Source: Domoticz (Forum News) Hi, Has anyone managed to integrate LG ThinQ into Domoticz? Statistics: Posted by pfloryann — Wednesday 18 February 2026 17:23 — Replies 0 — Views 188

Source: Domoticz (Forum News) Hi! I am facing problems when I try to use the JSON API with Domoticz in Docker. Code:

services:
  domoticz:
    image: domoticz/domoticz:stable
    container_name: domoticz
    restart: unless-stopped
    # Pass devices to container
    devices:
      - "/dev/serial/by-id/usb-RFXCOM_RFXtrx433_A1BFAMG-if00-port0:/dev/ttyUSB0"
    ports:
      - "8080:8080"
    volumes:
      - ./config:/opt/domoticz/userdata/
      # line added to reach extra (custom) screens
      - ./custom:/opt/domoticz/www/custom
    environment:
      - TZ=Europe/Amsterdam
      #- LOG_PATH=/opt/domoticz/userdata/domoticz.log

This does not work (404 error; the IP and port entered in the browser are correct): http://192.168.2.100:8080/domoticz/json.htm?type=devices

ChatGPT says: "Important point: in the Domoticz Docker image domoticz/domoticz:stable from version 2025.2, the old /json.htm API is not available in this container build. This image is mainly intended for GUI + plugins. Any scripts that try to fetch data via /json.htm will always return 404. So, regardless of roomplan or idx, you cannot use this API." ChatGPT proposes downgrading to an older container, which I don't think is a real solution. Any ideas or suggestions? Statistics: Posted by BartSr — Wednesday 18 February 2026 12:10 — Replies 6 — Views 359
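Independent of the container build, one detail worth double-checking is the request path: a stock Domoticz installation serves its JSON API at /json.htm directly under the web root, without a /domoticz/ prefix. A small sketch building the two common parameter styles (which style a given build accepts depends on the Domoticz version, so treat both as candidates to try):

```python
from urllib.parse import urlencode

def domoticz_device_url(host, port, use_command_style=True):
    """Build a Domoticz JSON API URL for listing devices.

    Assumption: the API lives at /json.htm on the web root (no
    /domoticz/ path prefix). Newer builds expect
    type=command&param=getdevices; older ones accepted type=devices.
    """
    if use_command_style:
        query = urlencode({"type": "command", "param": "getdevices"})
    else:
        query = urlencode({"type": "devices"})
    return f"http://{host}:{port}/json.htm?{query}"

# Host/port taken from the post above.
print(domoticz_device_url("192.168.2.100", 8080))
# http://192.168.2.100:8080/json.htm?type=command&param=getdevices
```

If /json.htm at the root also returns 404, the ChatGPT explanation may still be wrong for other reasons; checking the API settings and user rights in the Domoticz GUI would be the next step.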

Source: Domoticz (Forum News) Hello, I'm receiving notifications through Telegram but I'm unable to find where they are coming from. The history: I had TRVs that were "eating" batteries. I then decided to send a notification with a specific message as soon as the temperature was not reported after a certain time. I fixed the issue for the TRVs and removed the notification, but I'm still receiving it. Any clue? Thanks Statistics: Posted by Patricen — Tuesday 17 February 2026 19:41 — Replies 4 — Views 284

Source: IoT Business News At MWC Barcelona 2026, Telit Cinterion will demonstrate its CMB100 embedded modem and NExT eSIM technology, highlighting innovations in IoT connectivity, global deployments, and edge intelligence for mission-critical applications.

Source: Domoticz (Forum News) Hello, I had problems with the USB P1 interface (CRC errors), so I changed to a LAN interface. It now records data correctly. The annual consumption is shown correctly, but the comparison is incorrect. Since the CRC problems started in October 2025, I have incorrect comparisons for October and December; November is shown correctly. The same goes for January 2026. How can I repair the data for compared usage? The same applies to both gas and electricity. Statistics: Posted by nabru99 — Tuesday 17 February 2026 10:12 — Replies 0 — Views 230

Source: Domoticz (Forum News) After a database restore, Domoticz doesn't come online: no devices found and no communication... I did a service restart, and after that Domoticz was running normally. I tried this with a database backup and restore (no database changes) and after a power cycle of the RPi 3B+. I have seen this also on earlier beta versions (2025.2 17057, 2025.2 17099 and 2025.2 17189). Attached is a crash log after a database restore. domoticz_crash.log Statistics: Posted by Rik60 — Monday 16 February 2026 10:11 — Replies 5 — Views 334

Source: Domoticz (Forum News) Hello, I have many lines like the following in my journald logs and wonder how to prevent that! No debugging is enabled on the Python plugin, and the domoticz process is started like this:

root 984 1 53 10:39 ? 00:02:50 /home/pi/domoticz/domoticz -daemon -www 8080 -sslwww 443 -syslog

Feb 14 10:29:28 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 912 Name: Piscine - Filtration, Type: 244, subType: 73, switchType: 0, s_value: , n_value: 0, n_value_string: Off, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:29 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 917 Name: Piscine - PH Value, Type: 243, subType: 31, switchType: 0, s_value: 0, n_value: 0, n_value_string: 0, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:29 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 916 Name: Piscine - PH, Type: 243, subType: 22, switchType: 0, s_value: Waiting values, n_value: 0, n_value_string: Waiting values, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:29 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 919 Name: Piscine - Redox Value, Type: 243, subType: 31, switchType: 0, s_value: 0, n_value: 0, n_value_string: 0, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:29 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 918 Name: Piscine - Redox, Type: 243, subType: 22, switchType: 0, s_value: Waiting values, n_value: 0, n_value_string: Waiting values, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:29 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 900 Name: General Elec - Total, Type: 243, subType: 29, switchType: 0, s_value: 2081;10052200, n_value: 0, n_value_string: 2081;10052200, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:30 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 769 Name: ECS - SHW Control, Type: 244, subType: 62, switchType: 18, s_value: 20, n_value: 1, n_value_string: Auto, last_update_string: 2026-02-14 10:29:28
Feb 14 10:29:30 CasaiaProV4-test sudo[11131]: pam_unix(sudo:session): session closed for user root
Feb 14 10:29:30 CasaiaProV4-test domoticz[10799]: Python: Changed: ID: 770 Name: ECS - SHW Setpoint, Type: 242, subType: 1, switchType: 0, s_value: 65, n_value: 1, n_value_string: 65, last_update_string: 2026-02-14 10:29:28

Statistics: Posted by pipiche — Saturday 14 February 2026 10:45 — Replies 0 — Views 172

97. EN v2.0.6

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.6: Dive Deeper into Your Movies! This update brings a significant enhancement to the HomeFlix widget, making your media experience even richer. What's New: New Movie Detail Screen: You can now access detailed information for each title. Cast & Crew: View director and production details. Meet the Stars: Browse through actor names and photos directly from the UI. UI Optimizations: Various small tweaks and performance improvements for a smoother navigation experience. Full Changelog: v2.0.5...v2.0.6

Source: Domoticz (Forum News) Last year, on February 12, I moved house. Now these values are in the database, causing a huge usage figure for February 2025: Code:

sqlite> SELECT * FROM Meter_calendar WHERE DeviceRowID=8 AND Date>='2025-01-28' AND Date<'2025-03-08';
DeviceRowID|Value|Counter|Date
8|2534|3087874|2025-01-28
8|1387|3089261|2025-01-29
8|2728|3091989|2025-01-30
8|2265|3099687|2025-02-03
8|2912|3102599|2025-02-04
8|1616|3104215|2025-02-05
8|1983|3106198|2025-02-06
8|3332|3109530|2025-02-07
8|2014|3113587|2025-02-09
8|3627|3117214|2025-02-10
8|1126|3654334|2025-02-13 <<<
8|4587|3658921|2025-02-14
8|4498|3663419|2025-02-15
8|2373|3665792|2025-02-16
8|2933|3668725|2025-02-17
8|2762|3671487|2025-02-18
8|3337|3674824|2025-02-19
8|4906|3679730|2025-02-20
8|798|3680528|2025-02-21
8|2299|3682827|2025-02-22
8|363|3683190|2025-02-23
8|2116|3685306|2025-02-24
8|2492|3687798|2025-02-25
8|975|3688773|2025-02-26
8|1184|3689957|2025-02-27
8|879|3690836|2025-02-28
8|3994|3694830|2025-03-01
8|1689|3696519|2025-03-02
8|589|3697108|2025-03-03

The actual usage is around 60 m³, not the 600 m³ that is in the graph. To get an idea of how the month total is calculated, I shift-click-removed a few day totals. That did not make a difference in the month total. My question is: can the February month value be corrected by changing the Counter numbers, without touching the January and March month totals? Probably the '2025-02-03'? Thanks. Statistics: Posted by hans1612 — Friday 13 February 2026 14:53 — Replies 3 — Views 186
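Assuming the month total is derived from the Counter progression (an assumption about Domoticz internals, not something stated in the post), the suspect row is the one whose day-to-day counter delta is implausibly large; in the data above, the jump from 3117214 (2025-02-10) to 3654334 (2025-02-13) is 537120 units, consistent with the ~540 m³ excess described. A sketch that locates such a jump:

```python
import sqlite3

# Sketch: locate the counter jump that inflates the month total.
# Assumes the Meter_Calendar layout shown in the post:
# DeviceRowID | Value | Counter | Date  (subset of rows around the jump)
rows = [
    (8, 2014, 3113587, "2025-02-09"),
    (8, 3627, 3117214, "2025-02-10"),
    (8, 1126, 3654334, "2025-02-13"),  # <<< counter jumped here
    (8, 4587, 3658921, "2025-02-14"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Meter_Calendar"
            "(DeviceRowID INT, Value INT, Counter INT, Date TEXT)")
con.executemany("INSERT INTO Meter_Calendar VALUES (?,?,?,?)", rows)

# Delta between consecutive counter readings; a plausible day is a few
# thousand units, so anything far above that marks the bad row.
deltas = con.execute(
    "SELECT Date, Counter - LAG(Counter) OVER (ORDER BY Date) AS delta "
    "FROM Meter_Calendar WHERE DeviceRowID = 8").fetchall()
for date, delta in deltas:
    print(date, delta)
```

Running this on the real table would point at exactly which Counter values need to be shifted down to keep the February total consistent without disturbing January or March. Back up the database first.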

Source: IoT Business News IoT project teams face challenges like device integration delays, data inconsistencies, and security disruptions. Adopting standards, automation, and scalable practices improves productivity and project success.

Source: Domoticz (Forum News) My "wishlist" for an update of the energy dashboard: 1. In the settings for the energy dashboard, you can define up to three custom widgets that are visually linked to the home consumption bubble on the right-hand side. This is fine for a consuming device like a heat pump (see screenshot). But for devices that produce power, it would be more logical to link them to the upper bubble with solar power. In my case, this is the sum of a small + big solar power plant. 2. And it would be nice if the font size of the text device were bigger (in my case the weather description). Statistics: Posted by imautohuttraeger — Thursday 12 February 2026 9:25 — Replies 0 — Views 235

Source: Domoticz (Forum News) I have a Dzvents script that will update a user variable every minute and makes a note of the variable update in the log: 2026-02-11 13:51:00.735 Status: Set UserVariable nomotionCounter = 380 Is it possible not to update the log with (this) User variable updates? As this is information that I don't really need in the log. Statistics: Posted by jberinga — Wednesday 11 February 2026 14:42 — Replies 5 — Views 194

Source: Domoticz (Forum News) I have an infrared reading head on my electric meter ("Hichi" with Tasmota). Transferring the data via MQTT to Domoticz works, but for at least one of the figures I get this error:

Error: Invalid Number sValue: '%' for device idx: '%'
Error: GetJSonDevices: exception occurred : 'stoull'

For those of you who are savvy with Tasmota: this is the code of the customized script that runs there, and it displays correct data in the Tasmota web interface:

>D
>B
=>sensor53 r
>M 1
+1,3,s,0,9600,,1
1,77070100010800FF@100000000,Zählerstand Import,kWh,1_8_0,8
1,77070100010801FF@1000,Zählerstand Import T1,kWh,1_8_1,8
;1,77070100010802FF@100000000,Energie Bezug T2,kWh,1_8_2,8
1,77070100020800FF@100000000,Zählerstand Export,kWh,2_8_0,8
1,77070100100700FF@1,Summe Verbrauch,W,16_7_0,16
1,77070100240700FF@1,akt. Verbrauch L1,W,36_7_0,16
1,77070100380700FF@1,akt. Verbrauch L2,W,56_7_0,16
1,770701004C0700FF@1,akt. Verbrauch L3,W,76_7_0,16

But even the non-customized Script from the Tasmota wiki shows the same error: https://tasmota.github.io/docs/Smart-Me ... d3-obissml Any hints are highly welcome Statistics: Posted by imautohuttraeger — Tuesday 10 February 2026 12:30 — Replies 10 — Views 291
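The "Invalid Number sValue" error suggests a non-numeric string is reaching Domoticz for at least one reading. A small sketch of sanitizing a value before it is forwarded (a hypothetical helper, not part of Tasmota or Domoticz; it mirrors the numeric-extraction pattern used in dzVents scripts elsewhere on this page):

```python
import re

# Hypothetical helper: strip units/garbage from a meter reading before
# it is handed to Domoticz, which rejects non-numeric sValue strings.
def to_number(raw):
    """Return the first decimal number found in `raw`, or None."""
    m = re.search(r"-?\d+(?:\.\d+)?", str(raw))
    return float(m.group()) if m else None

print(to_number("1234.5 kWh"))  # 1234.5
print(to_number("%"))           # None
```

Logging the raw MQTT payloads and dropping (or defaulting) any reading for which this returns None would both silence the error and reveal which OBIS field is producing the bad value.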

Source: Domoticz (Forum News) When I stop Domoticz manually I get this error; does anyone have any idea how to solve this or what the cause is? Code:

2026-02-08 11:11:03.286 Error: Domoticz(pid:677, tid:677('domoticz')) received fatal signal 6 (Aborted)
2026-02-08 11:11:03.286 Error: siginfo address=0x2a5, address=0x7f04e28c495c
2026-02-08 11:11:03.295 Error: Failed to start gdb, will use backtrace() for printing stack frame
2026-02-08 11:11:03.300 Error: #0 /home/eddy/domoticz/domoticz : + 0x454b23 [0x557cbca5cb23]
2026-02-08 11:11:03.300 Error: #1 /home/eddy/domoticz/domoticz : signal_handler(int, siginfo_t*, void*) + 0x245 [0x557cbca5d5b5]
2026-02-08 11:11:03.300 Error: #2 /lib/x86_64-linux-gnu/libc.so.6 : + 0x3fdf0 [0x7f04e286fdf0]
2026-02-08 11:11:03.300 Error: #3 /lib/x86_64-linux-gnu/libc.so.6 : + 0x9495c [0x7f04e28c495c]
2026-02-08 11:11:03.300 Error: #4 /lib/x86_64-linux-gnu/libc.so.6 : gsignal + 0x12 [0x7f04e286fcc2]
2026-02-08 11:11:03.300 Error: #5 /lib/x86_64-linux-gnu/libc.so.6 : abort + 0x22 [0x7f04e28584ac]
2026-02-08 11:11:03.300 Error: #6 /lib/x86_64-linux-gnu/libpython3.13.so : + 0x99fa3 [0x7f04e1499fa3]
2026-02-08 11:11:03.300 Error: #7 /lib/x86_64-linux-gnu/libpython3.13.so : + 0x2b15f7 [0x7f04e16b15f7]
2026-02-08 11:11:03.300 Error: #8 /lib/x86_64-linux-gnu/libpython3.13.so : PyEval_RestoreThread + 0xab [0x7f04e1684ecb]
2026-02-08 11:11:03.300 Error: #9 /home/eddy/domoticz/domoticz : Plugins::CPluginSystem::StopPluginSystem() + 0xbb [0x557cbcf58d9b]
2026-02-08 11:11:03.300 Error: #10 /home/eddy/domoticz/domoticz : MainWorker::Stop() + 0x15c [0x557cbca23a3c]
2026-02-08 11:11:03.300 Error: #11 /home/eddy/domoticz/domoticz : main + 0x5c5 [0x557cbc922e75]
2026-02-08 11:11:03.300 Error: #12 /lib/x86_64-linux-gnu/libc.so.6 : + 0x29ca8 [0x7f04e2859ca8]
2026-02-08 11:11:03.300 Error: #13 /lib/x86_64-linux-gnu/libc.so.6 : __libc_start_main + 0x85 [0x7f04e2859d65]
2026-02-08 11:11:03.300 Error: #14 /home/eddy/domoticz/domoticz : _start + 0x21 [0x557cbc949b41]

My system is DietPi running as a VM under Proxmox with:

2026-02-08 11:16:29.414 Status: Domoticz V2025.2 (build 17082) (c)2012-2026 GizMoCuz
2026-02-08 11:16:29.414 Status: Build Hash: 0776fa964, Date: 2026-02-07 15:03:33

PRETTY_NAME="Debian GNU/Linux 13 (trixie)"
NAME="Debian GNU/Linux"
VERSION_ID="13"
VERSION="13 (trixie)"
VERSION_CODENAME=trixie
DEBIAN_VERSION_FULL=13.3
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Linux domoprox 6.12.63+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.63-1 (2025-12-30) x86_64 GNU/Linux

Statistics: Posted by Kedi — Sunday 08 February 2026 11:20 — Replies 6 — Views 435

Source: Domoticz (Forum News) I have https://www.zigbee2mqtt.io/devices/TS06 ... witch.html. Zigbee2mqtt supports it and I can get intensity as well as color temperature data:

{
  "adjustment_mode": "brightness",
  "color_temp": 178,
  "group_id": null,
  "linkquality": 174,
  "mode": null,
  "power_on_behavior": null,
  "state": null,
  "state_l1": null,
  "state_l2": null,
  "switch_mode_l1": null,
  "switch_mode_l2": null,
  "brightness": 140
}

In Domoticz I can't get a CCT value. Am I doing something wrong? How can I get a CCT (color_temp) value in Domoticz? Statistics: Posted by palnic — Friday 06 February 2026 17:09 — Replies 0 — Views 216
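One detail that may help: Zigbee2MQTT reports color_temp in mireds, while a CCT display usually expects Kelvin; the conversion is simply 1,000,000 divided by the mired value. For the payload above:

```python
# Zigbee2MQTT's color_temp is in mireds (micro reciprocal degrees).
# Converting to a Kelvin CCT value: K = 1,000,000 / mired.
def mired_to_kelvin(mired):
    return round(1_000_000 / mired)

print(mired_to_kelvin(178))  # 5618 K (the color_temp in the payload above)
print(mired_to_kelvin(370))  # 2703 K (a typical warm-white value)
```

So a color_temp of 178 corresponds to a cool white of roughly 5600 K; whether Domoticz exposes this depends on how the MQTT integration maps the device, which is a separate question from the unit conversion.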

Source: Domoticz (Forum News) Hey, I have a power meter where I sometimes see weird values; this is caused by my input script, which I am fixing separately. But when I try to delete the bad sample from the graph, it doesn't work; it only works when deleting it with sqlite. See the database values below. Can this be improved? The line dated 2026-02-06 05:40:00 is the wrong one. Code:

499|30221370|0|2026-02-06 05:30:00|0.2515
499|30221370|0|2026-02-06 05:35:00|0.2515
499|22256300|0|2026-02-06 05:40:00|0.2515
499|30221370|0|2026-02-06 05:45:00|0.2515
499|30221370|0|2026-02-06 05:50:00|0.2515

Statistics: Posted by JanJaap — Friday 06 February 2026 14:57 — Replies 0 — Views 205
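A generic way to catch such rogue samples in the input script, before they ever reach the database, is to compare each reading with both neighbours; a minimal sketch (the 10% threshold is an arbitrary assumption, tune it to your meter):

```python
# Sketch: flag samples that disagree sharply with both neighbours,
# like the 22256300 reading at 05:40 in the rows above.
def flag_outliers(values, tolerance=0.1):
    """Return indexes whose value deviates from both neighbours
    by more than `tolerance` (relative)."""
    bad = []
    for i in range(1, len(values) - 1):
        prev_v, v, next_v = values[i - 1], values[i], values[i + 1]
        if (abs(v - prev_v) > tolerance * prev_v
                and abs(v - next_v) > tolerance * next_v):
            bad.append(i)
    return bad

readings = [30221370, 30221370, 22256300, 30221370, 30221370]
print(flag_outliers(readings))  # [2]
```

Requiring both neighbours to disagree avoids flagging legitimate step changes, where a new level persists in the following samples.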

Source: Home Assistant (Blog officiel) Home Assistant 2026.2! February is the month of love, and this release is here to share it! The new Home Dashboard is now the official default for all new installations. If you’ve been using Home Assistant for a while and never customized your default view, you’ll get a suggestion to switch; give it a try! I also need your help! The Open Home Foundation device database is being built as a community-powered resource to help everyone make informed decisions about smart home devices. Head to Home Assistant Labs to opt in and contribute your anonymized device data. Add-ons are now called Apps! After a lot of community discussion, it was time to use terminology that everyone understands. Your TV has apps, your phone has apps, and now Home Assistant has apps too. My personal favorite this release? The completely redesigned Quick search! If you’re like me and navigate Home Assistant using your keyboard, you’re going to love this one. Press ⌘ + K (or Ctrl + K on Windows/Linux) and you have instant access to everything. Enjoy the release! ../Frenck

In this release:
- A new way to view your home (Discovered devices at a glance; Area assignments made easy; Faster area edits; UX and visual upgrades)
- Device database: We need your help! (Help us out and share your devices; See the data in action; Join us in building something meaningful)
- Add-ons are now called Apps (A faster, snappier Apps panel)
- Purpose-specific triggers and conditions progress (New triggers; New conditions)
- A brand new card: The distribution card
- Quick search: The fastest way to anything (Your favorite shortcuts still work)
- Integrations (New integrations; Noteworthy improvements to existing integrations; Integration quality scale achievements; Now available to set up from the UI)
- Other noteworthy changes (Add buttons to your heading card; Pick specific entities in your area card)
- Patch releases: 2026.2.1 - February 6; 2026.2.2 - February 13; 2026.2.3 - February 20
- Need help? Join the community
- Backward-incompatible changes; All changes

A huge thank you to all the contributors who made this release possible! And a special shout-out to @laupalombi and @mkerstner who helped write the release notes this release. Also, @wollew, @Diegorro98, and @MindFreeze for putting effort into tweaking its contents. Thanks to them, these release notes are in great shape.

A new way to view your home

The Home Dashboard is now Overview as it becomes the official default, replacing the old “Overview” for all new instances. If you’re a long-time user who never customized your default view, we’ll suggest the switch to you; otherwise, you can find it in Settings > Dashboards to try it out whenever you’re ready. Liked the old Overview as a way to build your custom dashboards? You can still do it. Go to Settings > Dashboards, select Create, and pick the Overview (legacy) template.

Discovered devices at a glance

Check out the new card in the For You section! It instantly displays any new devices your Home Assistant has discovered, allowing you to add them on the spot or jump straight to device management without digging through menus.

Area assignments made easy

In the last release, we added a dedicated Devices area within the Home Dashboard to catch everything currently unassigned. Now this section provides quick prompts to help you categorize your devices into the right rooms, keeping your setup organized with minimal effort.

Faster area edits

Need to swap the area temperature sensor? Area pages now feature a shortcut in the Edit button. This lets you jump straight to the area’s configuration to update primary sensors like humidity or temperature in seconds. We’ve also tidied up the interface by removing awkward empty spaces and fixing issues with some back arrows. Navigating through your sub-menus should now feel as smooth and predictable as you’d expect.
UX and visual upgrades

Modern look in the default theme: We’ve retired the old blue top bar in favor of a clean, consistent theme that matches our Settings page. This distraction-free design lets your cards and data take center stage.

Personalized themes per user: Themes have moved! You can now find and toggle your favorite looks directly within your User profile, making it easier to set up a theme that works for you on any device you are logged in to.

Device database: We need your help!

Finding reliable information about smart home devices before you buy them can be challenging. That’s why we’re building the Open Home Foundation device database: a community-powered resource that helps you make informed decisions based on real-world data. We’ve been working with early contributors to lay the groundwork, and the results are already impressive: over 10,000 unique devices across more than 260 integrations have been submitted by Home Assistant users who opted in to share their anonymized data.

Help us out and share your devices

Since we’re still in the early stages, the device database lives in Home Assistant Labs, where you can opt in to share anonymized information about the devices in your home. We have also added a new section called Device analytics to Home Assistant Analytics, which shows up when you enable it in Home Assistant Labs. If you opt in, you are, of course, able to opt out at any time. Privacy is our foundation. We collect zero personal data, period. Only aggregated, anonymized device information is shared if someone chooses to opt in, providing valuable insights while keeping your privacy intact. You can preview what is being sent using the Preview device analytics option available in the top-right corner on the Analytics page. Read our Data Use Statement for complete details.

See the data in action

We’ve launched an initial public dashboard where you can explore aggregated statistics as it grows. This is just our first step.
We want to build what comes next together with you.

Join us in building something meaningful

Head to Settings > System > Labs to enable device analytics and start contributing your real-world anonymized device data to help others make better choices. Read our blog post for more details and join the conversation in our Discord project channel; we’d love to hear your ideas, feedback, and questions as we shape this resource together.

Add-ons are now called Apps

Starting with this release, add-ons are now called apps! You might be wondering: why change the name? The answer comes down to making Home Assistant more approachable for everyone, especially newcomers. When you first open Home Assistant, you see two sections that sound very similar: “Add-ons” and “Integrations.” Both names imply something you add to extend Home Assistant, but they serve fundamentally different purposes. For those of us who’ve been in the ecosystem for a while, this distinction is second nature. But we keep seeing new users getting confused, attempting to install add-ons when they need integrations, or vice versa.

This is where the rename helps: use terminology that people already understand. Most people know what an “app” is. You open your phone’s app store, you pick an app, you install it. Your TV has an app store. Your NAS has apps. Heck, even some fridges have apps these days. It’s a concept everyone understands. The same mental model now applies to Home Assistant: Apps are standalone applications that run alongside Home Assistant. Integrations are connections that connect Home Assistant to your devices and services. Apps are separate software managed by your Home Assistant Operating System, running next to Home Assistant itself. They can be things like code editors, media servers, MQTT brokers, or database tools. Some apps even pair with integrations: for example, the Mosquitto MQTT broker app provides the service, while the MQTT integration connects Home Assistant to it.
Existing documentation, community posts, and tutorials will continue to reference “add-ons” for some time. Search engines and AI assistants will also need time to catch up.

107. EN P1 troubles

Source: Domoticz (Forum News) For years Domoticz has been running stably. Recently two Ratio solar chargers for EVs were installed. A HomeWizard P1 splitter (externally powered as well) provides data for the EV chargers and Domoticz. P1 now fails both for Domoticz and the EV chargers. ChatGPT says this is because of how Domoticz uses P1. Any idea how to solve this problem? Statistics: Posted by BartSr — Tuesday 03 February 2026 16:55 — Replies 19 — Views 547

Source: Domoticz (Forum News) Please make off-peak hours available in the Domoticz Settings so that plugins and scripts can use the values. In my country, some people may have up to 3 off-peak periods per day. Statistics: Posted by lemassykoi — Tuesday 03 February 2026 0:28 — Replies 23 — Views 636
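Until such a setting exists, a script can carry its own off-peak windows; a minimal sketch (the times and the helper are hypothetical, not a Domoticz API):

```python
from datetime import time

# Hypothetical helper, not a Domoticz API: check whether a timestamp
# falls inside any configured off-peak window. Windows whose start is
# later than their end are treated as spanning midnight.
OFF_PEAK = [(time(23, 30), time(7, 30)),   # wraps past midnight
            (time(12, 30), time(14, 0))]

def is_off_peak(t, windows=OFF_PEAK):
    for start, end in windows:
        if start <= end:
            if start <= t <= end:
                return True
        elif t >= start or t <= end:   # window wraps past midnight
            return True
    return False

print(is_off_peak(time(3, 15)))   # True
print(is_off_peak(time(9, 0)))    # False
```

Handling the midnight wrap explicitly is the part most ad-hoc scripts get wrong, and it matters because off-peak tariffs commonly start late in the evening.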

109. EN v2.0.5

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.5: Essential Framework Upgrade and Performance Boost This release focuses on the critical upgrade to the .NET 10 framework, ensuring long-term stability and delivering significant performance improvements. Full Changelog: v2.0.2...v2.0.5

Source: Domoticz (Forum News) Hello, Has anyone managed to get Enphase Envoy solar panels working in Domoticz 2025.2 with Trixie? It used to work with the Enphase native plugin, but it stopped working. The Enphase version is V8. Statistics: Posted by Fredom — Monday 02 February 2026 16:34 — Replies 2 — Views 199

Source: Domoticz (Forum News) Now that more and more manufacturers are moving over to Matter, should it have its own section like Zigbee and Z-Wave? Statistics: Posted by Varazir — Sunday 01 February 2026 20:54 — Replies 3 — Views 304

Source: Domoticz (Forum News) Hello everyone, I am unable to update Domoticz. When I try to do so via the web page, I get an error message and it corrupts Domoticz. When I try to do so via the SSH terminal, I get no error message, but Domoticz is also corrupted. Here is the information for the functional Domoticz: Version: 2025.1 Build Hash: 89d5c900d Compile Date: 2025-05-05 09:02:49 dzVents Version: 3.1.8 Python Version: 3.9.2 (default, Mar 20 2025, 22:21:41) [GCC 10.2.1 20210110] Another issue is that I have to switch to “sudo su” to be able to extract the backup. Can you help me? Translated with DeepL.com (free version) Statistics: Posted by MicMac7351 — Saturday 31 January 2026 18:09 — Replies 1 — Views 177

113. EN v2.0.4

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.4 - Visual Program Editor Fix This release addresses a bug in Visual Program Editor. Changelog: Fixed: VPE parameter value code generation (by @genemars in genielabs/HomeGenie#488) Full Changelog: v2.0.3...v2.0.4

114. EN v2.0.3

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.3 - Dashboard Fix This maintenance release addresses a UI issue introduced in v2.0.2. Changelog: Fixed: Resolved an issue where the "Add Widget" button was unresponsive on default dashboard layouts. Users can now customize the default dashboards as expected. Full Changelog: v2.0.2...v2.0.3

115. EN v2.0.0-rc.14

Source: HomeGenie (GitHub Releases) HomeGenie Server v2.0.0-rc.14 - The "Agentic Automations" Update This release candidate bridges the gap between conversation and execution, bringing Agentic AI directly into the HomeGenie Scheduler. It enables the system to not only respond to your queries but to reason and act autonomously through recurring, natural language-driven tasks. Lailama: Enhanced Reasoning & Memory Management The Lailama Local AI engine receives a major architectural update focused on long-term stability and granular control. Intelligent Memory Pruning: Introduced Max Turns configuration to prevent context bloat. The engine now automatically resets the history after a defined number of turns, ensuring 100% stable performance and preventing the "looping" issues common in long-session LLMs. Customizable Context Window: Users can now manually adjust the Context Size (n_ctx) via a new slider (up to 8192 tokens), allowing for massive device inventories and complex system reports. Privacy-First Context Toggle: A new "Include device list" option allows users to decide exactly when to share the home’s status with the AI, optimizing token usage and privacy. Prompt.Schedule API: A new synchronous API method designed for background tasks, allowing the AI to execute logic without "polluting" the primary chat history. Scheduler 2.0: The "Genie Command" The Scheduler is no longer just a timer; it’s now an Agentic Host. Genie Command Preset: Introducing a new preset action that lets you define automations in plain language. Instead of writing code, you can now schedule tasks like: "Check the porch light and set it to a random color different from the last one.", "Set the heat to 22° and start the Sunrise scene" Enhanced Scripting Host: For power users, the Scheduler now supports the $$.api.call (ApiHelper) method, enabling deep integration between custom scripts and the internal API bus. 
Security & Core Refactor Security System 2.0: The Security Alarm System has been refactored into its own dedicated domain: HomeAutomation.SecuritySystem/Main. BREAKING CHANGE: Any manual scripts or widgets referencing the old security module address must be updated to the new fully qualified path. Energy Monitor: Fixed a minor logic bug in the energy tracking program to improve aggregation accuracy during peak load shifts. UI & UX Refinements Streamlined Rendering: Fixed a rendering delay in Voice Control messages, ensuring that transcriptions appear instantly as they are processed. Consistent API Blocks: Improved the rendering of subsequent API blocks in the chat interface, ensuring that complex multi-step AI responses are displayed clearly and remain fully interactive. Responsive Resize: Further refinements to the chat's drag-to-resize bar for better handling on touch devices and high-DPI screens. IMPORTANT NOTE This RC is the final stepping stone towards the stable v2.0. We recommend that all users on rc.13 upgrade to test the new Lailama memory management features. Building the future of local, private, and agentic automation. Enjoy HomeGenie 2.0! Full Changelog: v2.0.0-rc.12...v2.0.0-rc.14
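The "Max Turns" pruning described in the rc.14 notes can be sketched as a simple reset-after-N-turns policy. This is an illustrative toy, not HomeGenie's actual implementation; the `ChatMemory` class and its method names are assumptions for the example.

```python
# Minimal sketch of "Max Turns" context pruning, assuming a
# reset-after-N-turns policy as described in the release notes.
# ChatMemory and its methods are illustrative, not HomeGenie's API.

class ChatMemory:
    def __init__(self, max_turns: int = 8):
        self.max_turns = max_turns                # turns kept before a reset
        self.history: list[tuple[str, str]] = []  # (user, assistant) pairs

    def add_turn(self, user_msg: str, assistant_msg: str) -> None:
        self.history.append((user_msg, assistant_msg))
        # Reset the history once the cap is reached, bounding the context
        # the model sees and avoiding long-session "looping".
        if len(self.history) >= self.max_turns:
            self.history.clear()

memory = ChatMemory(max_turns=3)
for i in range(5):
    memory.add_turn(f"question {i}", f"answer {i}")
print(len(memory.history))  # 2: the history was reset when turn 3 hit the cap
```

A sliding-window variant (dropping only the oldest turn) would preserve more context, but a full reset is the simpler way to guarantee a bounded prompt size.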

116. EN v2.0.0-rc.13

Source: HomeGenie (GitHub Releases) HomeGenie Server v2.0.0-rc.13 - The "Agentic Home" Update This release candidate transforms HomeGenie into a truly Agentic system, introducing a suite of specialized services that allow Artificial Intelligence to "feel" and interact with your home context in a native, professional manner. New Services & Major Features Context-Aware AI Architecture Context Engine & Real-time Briefing: A new system service that provides LLMs (Gemini and Lailama) with a live briefing before every interaction. The AI is now fully aware of weather conditions, energy consumption, and security status. Universal Fluent API Generator: A powerful new engine that generates ready-to-use code in C#, JavaScript, and Python using a unified, human-readable syntax (e.g., light.On(), thermostat.Level = 21.5). Centralized Chat History Service: A unified system utility for managing persistent conversation logs with multi-store support. It ensures a seamless context when switching between different AI providers (Local vs. Cloud). Pro Developer & Power User Tools File Manager Service: A new secure, API-driven service for reading and writing files within the program's data folder, featuring protection against path traversal. 'Editor' Option Field: Introduced a new option type that allows developers to embed full text editors (YAML, JSON, Markdown) directly into module features and program settings. Refined Code Editor: The integrated file editor now features a Light Theme, modal-driven constraints to prevent accidental closing, and improved mobile responsiveness. Widget Utility Expansion: Added this.apiCall(...).subscribe(...) to the base interface for custom ZUIX.js widgets, enabling easier system integration for third-party modules. UI & Performance Overhaul Resizable System Chat: The side panel for the system chat is now resizable via a smooth drag-to-resize bar, fully synchronized with global CSS variables. 
Zero-Lag Interface: Significant performance boost by moving custom ZUIX.js widget execution outside of NgZone and implementing a pre-calculated ViewState mapping to reduce Change Detection overhead. Interactive API Rendering: AI responses now render executable API blocks with dedicated buttons to copy the API call string, the full URL, or to interact directly with the identified modules. Smart Streaming UI: Live token streaming with optimized scroll management and unified "Stop" commands to interrupt AI generation. Improvements & Stability Energy Monitor 2.0: Improved RF error filtering for Watt readings and implemented binary persistence for reliable historical data logging. Gemini Knowledge Base: Updated system prompts for Gemini Widget Genie with the latest HomeGenie Suite 2.0 specifications and token verification logic. Global i18n: Manual localization updated for Italian and English; automated support expanded to over 90 languages. Bug Fixes Fixed: Chat history not updating when switching to "Base Control Chat." Fixed: Rendering issues for API commands related to system Programs/Scenarios. Fixed: API block rendering now correctly appears even during the AI "Thinking" phase. Fixed: Program "Settings" not saving correctly from certain widget menus. Fixed: Main title bar visibility issues when navigating from deep links. Building the future of local, private automation. Enjoy HomeGenie v2.0! Full Changelog: v2.0.0-rc.12...v2.0.0-rc.13
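The unified fluent syntax mentioned in the rc.13 notes (e.g. light.On(), thermostat.Level = 21.5) can be illustrated with a toy wrapper that turns fluent method calls into HomeGenie-style "Domain/Address/Command/Options" API strings. The class below and the exact path format are assumptions for illustration, not the generator's real output.

```python
# Toy sketch of a fluent wrapper mapping method calls onto
# "Domain/Address/Command/Options" API command strings.
# The class and path format are illustrative assumptions.

class FluentModule:
    def __init__(self, domain: str, address: str):
        self.domain = domain
        self.address = address
        self.sent: list[str] = []   # commands issued, kept for inspection

    def _call(self, command: str, options: str = "") -> str:
        api = f"{self.domain}/{self.address}/{command}"
        if options:
            api += f"/{options}"
        self.sent.append(api)       # a real client would issue this over HTTP
        return api

    def On(self) -> str:
        return self._call("Control.On")

    def SetLevel(self, level: float) -> str:
        return self._call("Control.Level", str(level))

light = FluentModule("HomeAutomation.ZWave", "4")
print(light.On())           # HomeAutomation.ZWave/4/Control.On
print(light.SetLevel(21.5)) # HomeAutomation.ZWave/4/Control.Level/21.5
```

The appeal of the fluent layer is that the same human-readable call reads identically whether the generator emits C#, JavaScript, or Python.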

Source: Domotique News At CES in Las Vegas (January 6-9), Korean manufacturer Samsung is preparing to make a major push in the smart home space.

Source: Domotique News For more than ten years, the connected digital home has relied on an iRobot with satisfaction. Sturdy and easy to maintain, the iRobot was in our view one of the most dependable robot vacuums. We use a ROOMBA almost daily. Admittedly heavier, bulkier, and noisier, but so much more reliable! The company iRobot, founded in Massachusetts at the […]

119. EN v2.0.0-rc.12

Source: HomeGenie (GitHub Releases) HomeGenie Server v2.0.0-rc.12 This release candidate introduces a massive overhaul to the Lailama Local AI engine and debuts a new, robust Async Download Manager. Lailama (Local AI) Overhaul The local intelligence module has been rewritten to be more stable, efficient, and user-friendly. Next-Gen Model Support: The model registry (models.yaml) now defaults to late-2025 SOTA models, including: Qwen 3 & Gemma 3 Llama 3.2 GPT-5.2 & DeepSeek 3.2 Distills Download Management: Fully manual workflow implemented. You can now Start, Pause, and Resume massive GGUF model downloads directly from the settings UI. Smart Stream Buffering: The engine now surgically removes generation artifacts (e.g., <|end|>, /avatar) and hallucinations (e.g., User:), ensuring a clean chat experience. Performance & Stability: Fixed memory leaks via robust resource disposal and safe model warmup. Hardened System Prompt to suppress internal monologue. Widget & UI Improvements Optimized Rendering: Implemented 50ms throttling for the chat widget, drastically reducing CPU usage during generation. UX Enhancements: Added Smart Auto-Scroll (won't disturb you if you are reading history), a CSS-based terminal cursor, and a new "Landing UI" for pending downloads. Core Infrastructure Async Download Manager: A new core service handling HTTP(S) file transfers with full support for Range headers (pause/resume) and temporary .part file handling. Bug Fixes & Misc Fixed a resource leak in Widget Editor when updating component views. Improved navigability in the HomeFlix (Media Server) UI. Full Changelog: v2.0.0-rc.11...v2.0.0-rc.12
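The pause/resume support in the rc.12 Async Download Manager rests on a generic HTTP mechanism: ask the server, via a Range header, only for the bytes past what the temporary .part file already holds. The sketch below shows that header construction; the function name and .part convention here are illustrative, not HomeGenie's actual code.

```python
# Generic sketch of HTTP Range-based download resumption, as used by
# pause/resume download managers. Names are illustrative assumptions.
import os

def resume_headers(part_path: str) -> dict:
    """Build the Range header needed to resume a partial download.

    If a non-empty .part file exists, request only the bytes after its
    current size; otherwise request the whole file (no Range header).
    """
    if os.path.exists(part_path):
        offset = os.path.getsize(part_path)
        if offset > 0:
            return {"Range": f"bytes={offset}-"}
    return {}

# Example: a 1024-byte .part file yields a request for bytes 1024 onward.
import tempfile
with tempfile.NamedTemporaryFile(delete=False, suffix=".part") as f:
    f.write(b"\x00" * 1024)
print(resume_headers(f.name))  # {'Range': 'bytes=1024-'}
os.unlink(f.name)
```

A server that honors the header replies with 206 Partial Content, and the client appends the new bytes to the .part file before renaming it to the final name.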

Source: Domotique News Japanese semiconductor supplier Renesas Electronics is strengthening its low-power RA microcontroller family with connectivity chips for IoT and home automation.


Windows

Source: BleepingComputer Windows Microsoft is rolling out new Windows 11 Insider Preview builds that improve security and performance during batch file or CMD script execution. [...]

Source: Windows Latest Microsoft says 2026 is the moment for AI computers, and it has listed some of the features you should look for.

Source: Neowin Word, Excel, Teams, PowerPoint, Outlook, and OneNote! Get all these essential Microsoft apps for your Windows PC for a low one-time cost. ...

Source: GNT A new cyberattack campaign is targeting Next.js developers through fake recruitment projects. Microsoft is warning about these booby-trapped code repositories which, under the guise of technical tests, install malware to exfiltrate sensitive data and take control of compromised machines.

Source: BleepingComputer Windows The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has released new details about RESURGE, a malicious implant used in zero-day attacks exploiting CVE-2025-0282 to breach Ivanti Connect Secure devices. [...]

Source: Next INpact ReFS, for Resilient File System, is the proprietary file system designed by Microsoft for very large storage volumes, with a maximum capacity of 35 PB per volume, compared to 256 TB standard on NTFS. Introduced with Windows (8.1) and Windows Server (2012), nearly 14 years ago, ReFS […]

Source: Neowin Microsoft has officially introduced boot support for the Resilient File System (ReFS) in the latest Windows Server Insider Preview builds. ...

Source: Neowin Microsoft is expanding the lineup of hardware designed to access Windows 365, its service that lets you rent a Windows PC in the cloud. ...

Source: BleepingComputer Windows Microsoft now allows more enterprise users to restore their personal settings and Microsoft Store apps from a previous Windows 11 device. [...]

Source: BleepingComputer Windows American manufacturer of medical devices, UFP Technologies, has disclosed that a cybersecurity incident has compromised its IT systems and data. [...]

Source: Windows Latest Microsoft Edge Organize Tabs uses AI to auto-group tabs by topic and color. Here’s a hands-on test of how well it organizes 40+ messy tabs.

Source: Windows Latest Microsoft promoted Edge Secure Network VPN as a free, built-in way to browse more securely, but a privacy researcher argues the feature only protects traffic inside the Edge browser.

Source: Windows Latest Microsoft lists Copilot as the number one productivity app for Windows 11, well above File Explorer and Snipping Tool.

Source: Windows Latest Microsoft is experimenting with a new Windows 11 taskbar setting that allows users to share any open app window directly with Copilot or other approved AI assistants. The feature builds on earlier “Share with Copilot” testing and shows how the taskbar is evolving into a dynamic hub for AI.

Source: BleepingComputer Windows North Korean hackers are deploying newly uncovered tools to move data between internet-connected and air-gapped systems, spread via removable drives, and conduct covert surveillance. [...]

Source: GNT The Connectivity Standards Alliance is launching Aliro 1.0, a unified protocol for digital keys. Backed by Apple, Google, and Samsung, it aims to standardize unlocking smart locks via smartphone or watch, using NFC, Bluetooth, and UWB. The goal is to end market fragmentation and simplify access to homes, offices, and hotels.

Source: Neowin Windows 11 build 28020.1673 is here for Canary users, offering them new emojis, a built-in network speed test (sort of), better dark mode, and more. ...

Source: BleepingComputer Windows A yearlong Europol-coordinated operation dubbed "Project Compass" has led to 30 arrests and 179 suspects being tied to "The Com," an online cybercrime collective that targets children and teenagers. [...]

Source: GNT There is no escaping the agentic AI trend. Microsoft is no exception, unveiling Copilot Tasks as the next step in the evolution and transformation of its AI assistant.

Source: Neowin As part of one of the largest private funding rounds in history, OpenAI secures a massive $50 billion infrastructure deal with Amazon, while maintaining its cooperation with Microsoft. ...

Source: Neowin The new AWS partnership signals a multi-cloud future for OpenAI, balancing its legacy ties with Microsoft including the revenue sharing agreement. ...

Source: Next INpact The IPO will have to wait: on Friday, OpenAI unveiled the terms of a record new funding round, raising $110 billion at a $730 billion valuation. The company led by Sam Altman also took the occasion to announce a strengthened partnership with Amazon, and insists that its agreement with Microsoft is not […]

Source: BleepingComputer Windows Everyday tools like PDF readers, email clients, and archive utilities quietly define the real attack surface. Action1 explains how third-party software drift increases exploit risk and why consistent patching reduces exposure across endpoints. [...]

Source: BleepingComputer Windows A Ukrainian man has pleaded guilty to operating OnlyFake, an AI-powered website that generated and sold more than 10,000 photos of fake identification documents to customers worldwide. [...]

Source: Neowin Android 17 Beta 2 adds EyeDropper APIs, a 3-hour OTP security delay, and system-wide chat bubbles. Test SDK 37 before the June 2026 stable release. ...

Source: Next INpact As opposition emerges around several data center projects in France and elsewhere, advocacy groups are seizing these moments of visibility for digital infrastructure to open a debate on our technological trajectory. "Mega datacenter, incinerator, no thanks!" In Vitry-sur-Seine, in Val-de-Marne, the replacement of an oil depot […]

Source: Next INpact Cegedim Santé admitted Thursday evening that it was the victim of an intrusion carried out through its healthcare software MonLogicielMedical.com. Revealed by France 2's evening news, the data leak could affect between 11 and 15 million French citizens. It reportedly concerns only patients' administrative records, however, and not their medical […]

Source: Neowin Microsoft is entering the AI agent space with Copilot Tasks, a tool designed to execute background actions like web browsing and app coordination to get things done. ...

Source: BleepingComputer Windows Google API keys for services like Maps embedded in accessible client-side code could be used to authenticate to the Gemini AI assistant and access private data. [...]

Source: Neowin Teams keeps piling on the upgrades, and February 2026 brings a long list of new features across chat, meetings, and more. ...

Source: Neowin Amazon sellers are quietly inflating Windows laptop storage by bundling OneDrive, and many buyers are missing the fine print. ...

Source: BleepingComputer Windows Trend Micro has patched two critical Apex One vulnerabilities that allow attackers to gain remote code execution (RCE) on vulnerable Windows systems. [...]

Source: BleepingComputer Windows DIY store chain ManoMano is notifying customers of a breach of personal data, caused by hackers compromising a third-party service provider. [...]

Source: Neowin Discover how to navigate the intersection of tech, cybersecurity, and commerce. ...

Source: BleepingComputer Windows A critical vulnerability in the Junos OS Evolved network operating system running on PTX Series routers from Juniper Networks could allow an unauthenticated attacker to execute code remotely with root privileges. [...]

Source: BleepingComputer Windows French professional football club Olympique de Marseille has confirmed a cyberattack after a threat actor claimed on Monday that it breached the club's systems earlier this month. [...]

Source: Next INpact France's audiovisual and digital communication regulator (Arcom) announced Thursday a new series of blocking and delisting measures covering 35 Russian media sites subject to European sanctions. Addressed to Internet service providers, search engines, and DNS service providers, these measures […]

Source: Windows Latest Exclusive: Lenovo Legion Go Fold is a handheld with foldable display, doubles as a PC

Source: BleepingComputer Windows The number of ransomware victims paying threat actors has dropped to 28% last year, an all-time low, despite a significant increase in the number of claimed attacks. [...]

Source: Neowin A new update for VMware Workstation is now available, fixing the long-standing bug with updates and plenty of other issues. ...

Source: Next INpact For Marco Rubio, the GDPR imposes "unnecessary and burdensome restrictions on data processing and cross-border data flow requirements" that could harm the interests of US technology companies. The Trump administration has just ordered US diplomats to lobby against initiatives promoting sovereignty and […]

Source: BleepingComputer Windows New York Attorney General Letitia James sued video game developer and publisher Valve Corporation for using game loot boxes to facilitate illegal gambling activities among children and teenagers. [...]

Source: Windows Latest Windows 11 KB5077241 is now rolling out with new features, such as Emoji 16, which means you'll get a handful of new emojis.

Source: Next INpact Anthropic made two back-to-back announcements that sent small shockwaves through financial markets. Several major cybersecurity names saw their share prices shaken following Friday's announcement of Claude Code Security. On Monday, it was giant IBM that suffered its worst trading day […]

Source: Next INpact Last June, Microsoft announced the availability in Europe of a new program promising that data would remain within the Union's borders, with assurances of oversight by European personnel. Eight months later, the US software maker is adding a few new features to its offering. Microsoft is highlighting the […]

Source: Next INpact Social networks, AI, video games... while the French government positions itself at the forefront of regulating children's digital usage, the Council of Europe's Commissioner for Human Rights is asking European lawmakers to aim regulation at platform obligations rather than at minors. "While […]

Source: Next INpact Several financial analysts, including some at Goldman Sachs, Morgan Stanley, and JP Morgan Chase, are questioning the growth the artificial intelligence industry supposedly brings to the US economy. In parallel, warnings are multiplying about how the weight of infrastructure is accounted for in the tech giants' investments, including from […]

Source: Next INpact From limits inside the apps themselves to physical solutions for blocking social networks, Next offers an overview of various tools designed to help you manage your time on social media... and other digital habits. Take back control of your social networks? Resist the temptation of […]

Source: Windows Latest WhatsApp may soon let you continue Android conversations (and possibly calls) on Windows 11 via “Resume." This feature showed up on some PCs.

Source: Windows Latest Windows 11 has faced loud criticism, especially after a turbulent 2025, but the narrative ignores historical context. Every Windows version has gone through similar update cycles, bug waves, and trust rebuilds. We examine how scale, visibility, and rapid serv
