Tencent HY-WU: Dynamic LoRA for Image Editing

Tencent has introduced the first part of its research series called HY-WU (Weight Unleashing). The core idea of the method is to abandon the traditional approach to model adaptation, where a fixed set of weights is used for all tasks. Instead, a special generator model is created, which during inference produces unique LoRA matrices for each input example without requiring prior fine-tuning or additional optimization.

This scheme addresses a common problem in fine-tuning: conflicts between tasks. For example, when a model must both blur an image and remove blur, or age a face and restore it, a standard adapter is forced to find a compromise. Gradients from the conflicting tasks point in opposite directions, degrading the overall result. Research shows that the cosine similarity of gradients from such heterogeneous tasks is often negative, averaging around -0.30, meaning the tasks literally pull the model in different directions.
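The conflict metric mentioned above is simply the cosine similarity between per-task gradient vectors. A minimal sketch (the gradient values below are toy numbers, not from the paper):

```python
import numpy as np

def grad_cosine(g1, g2):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# Hypothetical gradients from two conflicting tasks (e.g. "blur" vs. "deblur"):
# the vectors point in roughly opposite directions, so the cosine is negative.
g_blur = np.array([1.0, 0.5, -0.2])
g_deblur = np.array([-0.9, -0.4, 0.3])

print(grad_cosine(g_blur, g_deblur))  # negative: tasks pull the weights apart
```

A value near -0.30 averaged over many layer gradients indicates systematic, not accidental, opposition between the tasks.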

HY-WU proposes conditional parameter generation. An 8-billion-parameter generator model takes as input a joint representation of the image and the text query obtained from a SigLIP2 encoder, then produces LoRA matrices totaling about 0.72 billion parameters, which are injected into the base model. Training runs directly on downstream tasks, without pre-trained checkpoints or additional adapters.
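The data flow can be sketched in a few lines of numpy. This is a toy stand-in, not the HY-WU architecture: the real generator is an 8B-parameter model, whereas here a single linear map plays its role, and the dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D, R = 64, 4    # hidden size of the edited layer, LoRA rank
COND = 32       # size of the conditioning embedding (stand-in for SigLIP2 features)

# Toy "generator": one linear map from the condition embedding to the
# flattened LoRA factors A (R x D) and B (D x R).
W_gen = rng.normal(scale=0.01, size=(COND, R * D + D * R))

def generate_lora(cond):
    """Produce per-example LoRA factors from a conditioning vector."""
    flat = cond @ W_gen
    A = flat[: R * D].reshape(R, D)
    B = flat[R * D :].reshape(D, R)
    return A, B

def edited_forward(x, W_base, cond):
    """Apply the base layer plus the example-specific low-rank update B @ A."""
    A, B = generate_lora(cond)
    return x @ (W_base + B @ A).T

cond = rng.normal(size=COND)      # in HY-WU this would come from SigLIP2
W_base = rng.normal(size=(D, D))
x = rng.normal(size=D)
y = edited_forward(x, W_base, cond)
print(y.shape)  # (64,)
```

The key point is that the low-rank update is a function of the input condition, so each example effectively gets its own adapter instead of sharing one fixed set of LoRA weights.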

In tests, the task was text-prompted image editing, where task conflicts are obvious and visually noticeable. Human evaluation (GSB) shows that HY-WU significantly outperforms most popular open-source editors: its win rate among experts reaches 67–78%, trailing only the closed-source Nano Banana 2 and Nano Banana Pro. Internal testing confirmed that the quality gain comes precisely from conditional routing, i.e. from the generator working with a specific condition; combining or averaging different conditions degrades results to the level of the base model or worse.

Interestingly, increasing the number of trainable parameters does not necessarily yield a significant quality boost over a fixed point in weight space, as in standard LoRA approaches: full-scale training with a large number of parameters shows results similar to a simple shared LoRA.

This work is the first part of a series exploring how to utilize functional memory in generative models. Future plans include comparing this approach with retrieval methods and identifying cases where they are most effective; developing online training protocols so models can learn new tasks without losing existing skills; and scaling the generator separately from the main model.

Additionally, plans include extending the method to other interface layers, applying it to video and agent-based systems, and exploring the possibility of targeted removal of undesirable behaviors through generator state management.

Tencent has also released a bundle: the generator model together with the base model HY-Image-3.0-Instruct, available for experimentation. Running it requires approximately 8×40 GB or 4×80 GB of VRAM.

The license for use is Tencent Hunyuan Community License.

For more details, there are the project page, access-request instructions (in Chinese), the model itself, a technical report, and a GitHub repository.

This research opens new horizons for flexible image editing and demonstrates the potential of conditional generators to enhance neural network system performance.

