AI Landscape Shifts: OpenAI's Government Push, Meta's Superintelligence Vision, and Regulatory Waves

OpenAI is bringing its AI tools to the U.S. government, partnering with agencies from NASA to the Department of Defense to improve efficiency and citizen services. By validating AI's maturity, the initiative sets a blueprint for public-sector integration. To capitalize, government agencies should launch pilot programs that explore AI applications and build a strategic advantage.
OpenAI for Government: Democratizing AI for Public Services
Imagine a world where government services are not only efficient but also incredibly responsive, powered by the same cutting-edge technology that's reshaping industries worldwide. That vision is rapidly becoming a reality as AI tools become more accessible and integrated into the public sector.
OpenAI's Foray into Government Services
OpenAI, the company behind groundbreaking models like ChatGPT, an AI chatbot capable of generating human-quality text, has officially launched a program aimed at providing U.S. government agencies with access to its suite of AI tools. This initiative marks a significant step in democratizing AI, making it available for public services and potentially transforming how government operates.
This move by OpenAI signals a pivotal shift in the AI landscape, recognizing the immense potential for AI to enhance governmental efficiency and citizen services.
Expanding Partnerships: NASA, National Labs, and Treasury
This isn't just a one-off offering; OpenAI is strategically expanding its partnerships to include key government entities. Collaborations with NASA, national laboratories, and even the Treasury Department are already underway. These partnerships signify a broad recognition of AI's potential across diverse sectors, from space exploration to financial management. For instance, NASA could leverage AI to analyze vast datasets from space missions, while the Treasury could use it to detect financial fraud more effectively.
Here are some potential applications across these sectors:
NASA: Analyzing satellite imagery, optimizing mission planning, and predicting equipment failures.
National Labs: Accelerating scientific discovery through AI-driven simulations and data analysis.
Treasury: Enhancing fraud detection, improving tax compliance, and optimizing economic forecasting (a minimal sketch of the fraud-triage idea follows this list).
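To make the pilot-program idea concrete, here is a minimal sketch of how an agency team might prototype LLM-assisted fraud triage with the OpenAI Python SDK. The model choice, prompt, and transaction data are illustrative assumptions, not a description of any deployed Treasury system.

```python
# Hypothetical fraud-triage prototype using the OpenAI Python SDK.
# Everything here (model, prompt, data) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transaction = "Wire transfer of $9,900 sent twice within one hour to a newly added payee."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You triage transactions for human review. "
                       "Reply with REVIEW or OK, then one short reason.",
        },
        {"role": "user", "content": transaction},
    ],
)
print(response.choices[0].message.content)
```

In a real pilot, the model's verdict would only route a case to a human analyst, never act on its own.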
Pilot Program with the Department of Defense
Perhaps one of the most notable developments is the pilot program between OpenAI and the Department of Defense (DoD), valued at a staggering $200 million. The program will explore how AI can improve various aspects of national defense, from cybersecurity to logistics. While the specific applications remain under wraps, it's clear that the DoD sees immense value in leveraging AI to maintain a strategic advantage. It also gives other government agencies a template for implementing similar AI programs in their own departments.
Validating AI Maturity: A Blueprint for Public Sector Integration
The government's increasing adoption of AI tools serves as a powerful validation of the technology's maturity and reliability. No longer a futuristic concept, AI is now viewed as a practical solution to real-world problems. This validation sets a blueprint for other public sector organizations looking to integrate AI into their operations. By demonstrating the benefits of AI, OpenAI and other AI providers are paving the way for widespread adoption across government agencies at the local, state, and federal levels.
The move can be read as a signal of trust in the reliability and safety of AI, encouraging other agencies to explore government AI programs of their own.
As government agencies increasingly embrace AI, it's crucial to consider the ethical implications and ensure responsible implementation. This includes addressing concerns about bias, privacy, and transparency. However, the potential benefits of AI in the public sector are undeniable, and OpenAI's initiative represents a significant step toward realizing that potential.

Google DeepMind's AI Breakthroughs: From Self-Evolving Agents to Automated ML
Google DeepMind continues to push the boundaries of what's possible with AI, and some of their recent advancements are truly game-changing. From self-evolving agents to automated machine learning, their breakthroughs promise to reshape industries and accelerate scientific discovery.
Self-Evolving AI: Google DeepMind's TTD DR
Imagine an AI that can not only learn but also evolve its own learning strategies. That's the promise of Google DeepMind's TTD-DR (Test-Time Diffusion Deep Researcher), a self-evolving multi-agent system.
TTD-DR isn't just another algorithm; it's a framework for creating adaptable and intelligent AI agents.
TTD-DR allows multiple AI agents to interact, learn from each other, and refine their strategies over time without human intervention. This approach mirrors natural evolution, where the fittest strategies survive and propagate.
Key Benefits:
Adaptability: Agents can adapt to changing environments and new challenges.
Scalability: The system can scale to handle complex problems with many interacting agents.
Efficiency: Self-play allows agents to learn efficiently without relying on vast amounts of labeled data.
DeepMind's research opens the door to AI systems that can autonomously solve complex real-world problems, from optimizing traffic flow to managing energy grids.
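As a rough intuition for the self-evolving pattern described above, the toy loop below keeps a population of candidate strategies, scores them, and mutates the fittest into the next generation. It illustrates the general evolve-and-select idea, not DeepMind's actual TTD-DR implementation; the task and parameters are arbitrary assumptions.

```python
# Toy self-evolving loop: score a population of strategies, keep the
# fittest, and mutate them into the next generation. Illustrative only;
# not TTD-DR code, and the task is a stand-in for a real environment.
import random

def fitness(strategy: float) -> float:
    # Placeholder objective: a noisy quadratic peaking at 3.0.
    return -(strategy - 3.0) ** 2 + random.gauss(0, 0.1)

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(30):
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    population = [s + random.gauss(0, 0.5)  # mutate survivors into offspring
                  for s in survivors for _ in range(4)]

best = max(population, key=fitness)
print(f"best strategy after evolution: {best:.2f} (optimum is 3.0)")
```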
MLE-STAR: Automating Machine Learning Pipeline Creation
Creating effective machine learning pipelines often requires significant expertise and time. Google DeepMind's MLE-STAR aims to democratize this process by automatically building ML pipelines for Kaggle competitions. Think of it as an AI that can design other AIs. MLE-STAR leverages a combination of techniques to explore and optimize different pipeline configurations, significantly reducing the need for manual tuning.
How it Works:
Automated Feature Engineering: Automatically identifies and extracts relevant features from raw data.
Hyperparameter Optimization: Tunes the parameters of machine learning models to maximize performance.
Pipeline Selection: Selects the best-performing pipeline from a diverse set of candidates.
By automating the pipeline creation process, MLE-STAR can enable non-experts to leverage the power of machine learning and accelerate the development of AI solutions.
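For a feel of what "an AI that designs other AIs" automates, the sketch below runs a tiny pipeline search with scikit-learn: define candidate pipelines, cross-validate each, and keep the winner. MLE-STAR operates at far larger scale with LLM-driven search and refinement; the candidates and dataset here are arbitrary choices for illustration.

```python
# Tiny pipeline search: evaluate candidate ML pipelines and keep the best.
# Illustrates the select-and-refine loop systems like MLE-STAR automate;
# the candidates and dataset are arbitrary choices for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "scaled_logreg": make_pipeline(StandardScaler(),
                                   LogisticRegression(max_iter=2000)),
    "random_forest": make_pipeline(RandomForestClassifier(n_estimators=200,
                                                          random_state=0)),
}

scores = {name: cross_val_score(pipe, X, y, cv=5).mean()
          for name, pipe in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected pipeline:", best)
```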
AEF Model: Satellite-Free Earth Mapping
Access to accurate and up-to-date Earth mapping data is crucial for applications from disaster response to urban planning. Traditionally, this data has relied on satellite imagery, which can be costly and limited by cloud cover. Google DeepMind's AEF model offers a groundbreaking alternative: satellite-free Earth mapping built on fused global data.
AEF leverages readily available data sources, such as weather forecasts and ground-based measurements, to create detailed and accurate maps of land surface variables.
This approach not only reduces reliance on expensive satellite imagery but also provides more frequent and comprehensive coverage. The implications for environmental monitoring and resource management are immense. Imagine using Google Cloud AI with AEF to predict drought conditions or optimize agricultural yields – the possibilities are endless.
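As a hedged illustration of the underlying idea, the snippet below fuses two synthetic gridded sources (a noisy model field and a more precise measurement field) with inverse-variance weighting, a standard data-fusion technique. It is a generic sketch of multi-source fusion, not AEF's actual method.

```python
# Generic multi-source data fusion via inverse-variance weighting.
# Synthetic grids stand in for real data; this is not AEF's actual method.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(20.0, 5.0, size=(50, 50))  # unknown land-surface field

weather_model = truth + rng.normal(0, 2.0, truth.shape)  # coarse, noisy
ground_obs = truth + rng.normal(0, 0.5, truth.shape)     # precise

sources = [(weather_model, 2.0), (ground_obs, 0.5)]      # (grid, std dev)
weights = np.array([1.0 / std**2 for _, std in sources])
fused = sum(w * grid for w, (grid, _) in zip(weights, sources)) / weights.sum()

rmse = lambda a: float(np.sqrt(((a - truth) ** 2).mean()))
print(f"weather-only RMSE: {rmse(weather_model):.2f}")
print(f"fused RMSE:        {rmse(fused):.2f}")  # beats either source alone
```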
Multi-Agent AI: A Catalyst for Innovation
These individual breakthroughs collectively demonstrate the power of multi-agent AI to accelerate progress across diverse domains. Whether in research, coding, or geospatial intelligence, the ability to create AI systems that interact, learn, and evolve is transforming how we solve complex problems. For example, AI can analyze vast datasets faster and more accurately than humans, leading to new discoveries in medicine and materials science.
These advancements highlight Google DeepMind's commitment to pushing the boundaries of AI and its potential to address some of the world's most pressing challenges. As these technologies mature, we can expect to see even more transformative applications emerge, impacting everything from healthcare to environmental sustainability. Stay tuned to AI News for more updates on these exciting developments.

AI Data Wars: Anthropic Blocks OpenAI's Claude Access Over Misuse Allegations
The gloves are off in the AI arena, and the latest scuffle involves accusations of data appropriation that could reshape the rules of engagement.
Anthropic's API Lockdown: A Line in the Sand
In a bold move, Anthropic, the company behind the powerful Claude AI assistant, reportedly revoked OpenAI's access to its API. Think of it like cutting off a rival's water supply in a desert – a critical resource is suddenly denied. This isn't just a minor spat; it's a declaration that some lines can't be crossed in the pursuit of AI dominance.
The Alleged Misuse: Training on the Competition
At the heart of the issue lies the claim that OpenAI allegedly utilized outputs generated by Claude to train its own models. Imagine a chef secretly using a competitor's signature dish to refine their own recipe. The accusation suggests a violation of Anthropic's terms of service, which, like most AI providers, likely prohibit using API outputs to improve competing AI systems. This alleged act touches upon the increasingly vital discussion of AI intellectual property.
Setting Precedents: IP and Fair Play in the AI Era
This enforcement action carries significant weight, potentially setting precedents for how AI companies protect their intellectual property and define competitive boundaries.
Consider these implications:
IP Protection: If Anthropic's claims are substantiated, it reinforces the idea that AI-generated outputs are not free for anyone to use, akin to copyright protection for creative works extending into the realm of AI. Tools such as Copyleaks and Originality.ai, which detect plagiarism and AI-generated text, already operate in this space.
Competitive Boundaries: The incident highlights the need for clear rules of engagement in the rapidly evolving AI competition. What constitutes fair use of AI-generated data? Where does inspiration end and infringement begin?
API Terms of Service: This situation underscores the importance of carefully reviewing and adhering to the terms of service of AI APIs. Ignoring these terms can have significant consequences, as OpenAI is now allegedly discovering.
The Broader Implications: Navigating the AI Data Wars
This situation is indicative of the growing "AI data wars." Access to high-quality training data is the lifeblood of AI models. As AI models become more sophisticated, the demand for data increases, and the competition to acquire and protect that data intensifies. We're seeing a shift toward proprietary datasets and stricter enforcement of data usage agreements.
The clash between Anthropic and OpenAI underscores the critical need for clarity around AI data usage and IP rights. It's a warning shot that could reshape the landscape of AI development and competition.
As the industry matures, we can expect to see more disputes over data ownership, usage rights, and the ethical implications of training AI models on information sourced from competitors. Staying informed on AI News is becoming essential to understand these evolving dynamics.

Meta's $110B Bet on 'Personal Superintelligence' to Revolutionize Wearable AI
Imagine a world where your smart glasses aren't just for taking photos, but are powered by an AI that anticipates your needs, learns your preferences, and becomes an indispensable companion – that's the future Meta is betting on.
Meta's $110 Billion Gamble on Personal Superintelligence
Meta has officially announced its ambitious Personal Superintelligence (PSI) initiative, backed by a staggering $110 billion investment in infrastructure. This isn't just about building bigger data centers; it's about creating a fundamentally new computing paradigm where AI is deeply integrated into our daily lives through wearable devices. Think of it as a massive upgrade to the Meta AI we already see, but with the power to truly understand and assist you in a personalized way. The goal? To move beyond general-purpose AI and create AI that's intimately tailored to the individual.
Embedding AI Companions into Smart Glasses and Wearables
The core of Meta's vision lies in embedding AI companions into smart glasses and other AI wearables. Imagine wearing glasses that can not only translate languages in real-time but also offer contextual advice, remind you of important tasks, and even help you navigate unfamiliar environments. Meta envisions these AI companions as proactive assistants, constantly learning and adapting to your needs, making your life more efficient, enjoyable, and connected. This push could redefine how we interact with technology, moving from a screen-based world to one where AI is seamlessly woven into our physical reality. Think of it as a digital sixth sense, augmenting your capabilities and providing support when you need it most.
Recruiting Top AI Talent for the PSI Vision
To realize this ambitious vision, Meta is aggressively recruiting top AI talent. The company has reportedly poached key researchers and engineers from leading AI organizations like OpenAI and Scale AI. This talent acquisition spree underscores Meta's commitment to assembling a world-class team capable of tackling the complex challenges of building truly intelligent and personalized AI systems. Just as a sports team needs star players, Meta needs the best AI minds to execute its ambitious plan. This strategic hiring further signals their dedication to dominating the next wave of AI innovation.
Redefining Human-Computer Interaction
The promise of consumer-centric superintelligence is that it could completely redefine human-computer interaction.
Instead of passively consuming information or interacting with apps, we could have a dynamic, two-way relationship with AI. This goes beyond simple voice commands; it's about AI that anticipates our needs, understands our intentions, and proactively offers assistance. Imagine an AI that not only reminds you to pick up milk but also suggests a recipe based on the ingredients you already have and adjusts the lighting in your home to create the perfect cooking ambiance. This level of personalization and proactivity has the potential to transform every aspect of our lives, from work and education to entertainment and social interaction. We're talking about a future where technology isn't just a tool but a true partner, working alongside us to achieve our goals and enhance our experiences.
Meta's ambitious investment in Personal Superintelligence signals a major shift in the AI landscape, potentially ushering in an era where AI is not just intelligent but also deeply personal and integrated into our daily lives. The success of this venture will depend on Meta's ability to attract top talent, overcome technical challenges, and address ethical concerns surrounding personalized AI. For the latest developments in the AI world, keep an eye on AI News.

DeepSeek's Efficiency Leap: Challenging AI Sanctions with Low-Cost GPT-4 Performance
The AI landscape is in constant flux, with new players and strategies emerging to challenge the established order. One of the most compelling recent developments is DeepSeek's efficiency leap, which directly challenges the status quo in AI development and raises important questions about the impact of AI sanctions.
DeepSeek R1: GPT-4 Performance, Fraction of the Cost
DeepSeek, a rising star in the AI world, has achieved a remarkable feat: developing its R1 model to match GPT-4-level performance while drastically reducing training costs. Independent benchmarks indicate that R1 achieves results comparable to GPT-4 at a reported training cost of roughly $5.6 million. This is a game-changer: it demonstrates that top-tier AI performance doesn't require exorbitant investment. In a world where compute is a key bottleneck, DeepSeek's efficient design offers a potential path toward democratizing AI development. Imagine needing only a fraction of the resources to achieve state-of-the-art results – that's the power of DeepSeek's approach.
DeepSeek's R1 model proves that innovation and efficiency can trump raw computational power in the race for AI supremacy.
Mixture-of-Experts: The Secret to Efficiency
So, how did DeepSeek achieve this impressive cost reduction? The answer lies in its use of the Mixture-of-Experts (MoE) architecture. In MoE, a large model is divided into smaller, specialized "expert" sub-networks, and a router activates only the most relevant experts for each input, which significantly reduces the computational load. This allows DeepSeek to train a powerful model even with limited hardware access. It's like having a team of specialists, each focusing on their own area of expertise, rather than one generalist who has to know everything. This targeted approach enables faster training and more efficient resource utilization.
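To ground the intuition, here is a minimal MoE layer with top-k gating in PyTorch. The sizes and routing scheme are generic textbook choices, not DeepSeek's actual architecture (which adds shared experts, load-balancing losses, and other refinements).

```python
# Minimal Mixture-of-Experts layer with top-k gating in PyTorch.
# Generic textbook sizes and routing, not DeepSeek's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.gate(x)                           # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # route each token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():  # only selected experts process each token
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

Only top_k of the n_experts sub-networks run for any given token, which is why total parameter count can grow without a proportional rise in per-token compute.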
Challenging AI Sanctions Through Innovation
DeepSeek's efficiency-driven AI development carries significant implications, particularly in the context of AI sanctions. With limited access to advanced hardware due to US sanctions, companies are forced to become more resourceful and innovative. DeepSeek's success shows that these constraints can actually drive innovation, leading to more efficient AI models that can compete with those developed by well-resourced organizations. This ingenuity isn't just about cost savings; it's about resilience and adaptability in a rapidly evolving technological landscape. The ability to achieve comparable performance with fewer resources disrupts the existing power dynamics in AI development.
Reshaping Global AI Competitiveness
The implications of efficient AI training extend beyond individual companies or geopolitical tensions. DeepSeek's achievements contribute to reshaping global AI competitiveness. By demonstrating that high-performance AI can be developed with significantly lower investment, DeepSeek empowers other organizations to enter the field and compete on a more level playing field. This increased competition fosters further innovation and drives progress across the entire AI ecosystem. In essence, DeepSeek is not just building better AI; they are building a more accessible and equitable AI future.
DeepSeek is at the forefront of a movement towards efficient AI, potentially democratizing access to advanced AI capabilities worldwide.
DeepSeek's success acts as a catalyst, pushing the boundaries of what's possible with limited resources, ultimately accelerating the advancement and democratization of AI on a global scale. This shift has broader implications, including the need for updated AI regulation that accounts for the realities of efficient AI development. This new paradigm demands a shift in mindset – one that prioritizes ingenuity and resourcefulness alongside computational power.

The Open Chiplet Economy: Standardized Chiplets Promise Flexible, Low-Cost AI Hardware
Imagine building AI hardware like constructing with LEGOs – that's the promise of standardized chiplets, and it's closer than you think.
The Dawn of Standardized Chiplets
The Open Compute Project (OCP), a collaborative community focused on designing hardware that efficiently supports the growing demands of AI and other advanced computing applications, is building its open chiplet ecosystem around the Universal Chiplet Interconnect Express (UCIe) specification, an interconnect standard maintained by a broad industry consortium. This might sound like technical jargon, but the implications are profound. Think of UCIe as a universal language that lets different chiplets – small, specialized processing units – communicate seamlessly, regardless of manufacturer or architecture. This standardization paves the way for a truly open chiplet ecosystem.
Unlocking the Open Chiplet Economy
What does an "Open Chiplet Economy" actually mean? It signifies a move away from monolithic, proprietary AI chips designed and manufactured by a single vendor. Instead, it envisions a future where companies can mix and match chiplets from different sources, creating custom AI packages tailored to their specific needs. For instance, a company might combine a chiplet optimized for matrix multiplication (a core operation in deep learning) with another chiplet specializing in memory management, all on a single, integrated package. This disaggregation fosters innovation and competition, breaking the hold of single-vendor lock-in.
Lower Costs, Higher Flexibility, Peak Performance
The benefits of this approach are multifold:
Reduced Costs: By using smaller, specialized chiplets instead of massive, monolithic chips, manufacturers can cut production costs. Chiplets also allow for yield optimization: if one part of a large monolithic chip is faulty, the entire chip must be discarded, whereas with chiplets only the faulty unit needs replacing (a back-of-the-envelope yield model follows this list).
Increased Flexibility: Companies gain the freedom to design AI hardware architectures that perfectly match their workloads. This is particularly crucial in the rapidly evolving field of AI, where new algorithms and applications constantly emerge. The AI News section keeps you updated on the evolution of algorithms.
Enhanced Performance: By carefully selecting and integrating chiplets with specific capabilities, it's possible to exceed the performance of general-purpose processors. This is especially vital for demanding AI workloads like training large language models or powering compute-hungry creative tools such as Runway's video generation.
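Here is the promised back-of-the-envelope yield model. It uses the classic Poisson die-yield approximation Y = exp(-A·D); the defect density, wafer cost, and die areas are assumed numbers for illustration, not industry data.

```python
# Back-of-the-envelope yield economics: monolithic die vs. chiplets,
# using the Poisson die-yield model Y = exp(-A * D).
# All numbers are illustrative assumptions, not industry data.
import math

defect_density = 0.1   # defects per cm^2 (assumed)
wafer_cost = 10_000.0  # dollars per wafer (assumed)
wafer_area = 70_000.0  # usable mm^2 on a 300 mm wafer (approx.)

def cost_per_good_die(die_area_mm2: float) -> float:
    yield_rate = math.exp(-(die_area_mm2 / 100.0) * defect_density)  # cm^2
    dies_per_wafer = wafer_area / die_area_mm2
    return wafer_cost / (dies_per_wafer * yield_rate)

mono = cost_per_good_die(600)          # one big 600 mm^2 monolithic die
chiplets = 4 * cost_per_good_die(150)  # same silicon as four 150 mm^2 chiplets
print(f"monolithic: ${mono:.2f}, four chiplets: ${chiplets:.2f}")
# Smaller dies yield better, so the chiplet total comes out cheaper
# (before packaging costs, which partly offset the gain).
```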
The UCIe standard is more than just a technical specification; it's a catalyst for innovation in AI hardware.
In essence, the open chiplet economy promises to democratize access to high-performance AI hardware. This new era fosters competition and allows for unprecedented customization, which could lead to breakthroughs in AI capabilities across various industries.

AI Revolutionizes Healthcare: Superior Diagnostics and Precision Medicine on the Rise
AI is not just knocking on the door of healthcare; it's already rewriting the rules of diagnosis and treatment.
AI Transforming Diagnostics: From Promise to Reality
We're witnessing a profound shift as AI transitions from experimental applications to demonstrably superior diagnostic tools. Imagine a world where diseases are detected earlier and with greater accuracy, leading to more effective treatments and improved patient outcomes. This isn't science fiction; it's the direction AI in healthcare is rapidly heading.
AI's ability to analyze vast datasets and identify subtle patterns is proving invaluable in diagnostics, surpassing human capabilities in certain areas.
Consider Microsoft's MAI-DxO platform. This isn't just another AI tool; it's a diagnostic powerhouse, reportedly achieving around 85% accuracy on complex clinical case benchmarks – a result with significant implications for the future of medical care. But what makes MAI-DxO so effective?
Chain-of-Debate Reasoning: MAI-DxO employs a sophisticated "chain-of-debate" multi-agent reasoning system. This means multiple AI agents analyze data independently, then debate and refine their conclusions, resulting in more robust and reliable diagnoses. It's like having a panel of expert physicians, each challenging and validating the others' findings.
Surpassing Physician Accuracy: Most compelling of all, MAI-DxO surpasses the average accuracy of human physicians on certain diagnostic tasks. AI won't replace doctors, but it can augment their abilities and reduce diagnostic errors, serving as a safety net that catches what a human might miss – assistive AI, much as Grammarly augments a writer rather than replacing them.
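To illustrate the orchestration pattern (not Microsoft's proprietary implementation), here is a skeletal chain-of-debate loop. The roles and prompts are invented for illustration, and query_model is a hypothetical stand-in for whatever LLM API you use.

```python
# Skeletal "chain-of-debate" orchestration: several agents propose and
# critique diagnoses, then a moderator consolidates. A sketch of the
# general pattern only, not MAI-DxO's code; query_model() is a
# hypothetical stand-in for any LLM API call.
def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this to an LLM API of your choice")

ROLES = [
    "You are a cautious internist. Propose the most likely diagnosis.",
    "You are a specialist who challenges the leading hypothesis.",
    "You are a test-utility skeptic who weighs cost against information gain.",
]

def chain_of_debate(case_summary: str, rounds: int = 2) -> str:
    transcript = f"Case: {case_summary}\n"
    for _ in range(rounds):
        for role in ROLES:
            reply = query_model(f"{role}\n\nDebate so far:\n{transcript}")
            transcript += f"\n{reply}"
    # A final moderator pass turns the debate into one consolidated answer.
    return query_model(
        "Summarize the debate below into a single ranked differential "
        f"diagnosis with confidence estimates:\n{transcript}"
    )
```

The value of the pattern is that each agent's output is challenged before anything is reported, mirroring how a physician panel cross-examines a diagnosis.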
The Rise of Precision Medicine
Beyond diagnostics, AI is also revolutionizing precision medicine – tailoring treatments to individual patients based on their unique genetic makeup and disease characteristics.
One of the most promising applications of AI in precision medicine is its ability to predict tumor stemness, a critical factor in cancer treatment.
By analyzing complex genomic data, AI can identify patients who are more likely to respond to specific therapies, enabling oncologists to make more informed treatment decisions.
Furthermore, companies like Everlab are pioneering continuous biomarker monitoring, raising $10 million to develop AI-powered systems that track patients' health in real-time. This technology offers a glimpse into the future of personalized healthcare, where patients are continuously monitored and treatments are adjusted proactively based on their individual needs.
Biomarker Monitoring: A New Era of Personalized Healthcare
Continuous biomarker monitoring is poised to revolutionize how we manage chronic diseases. Imagine a world where patients wear sensors that continuously track key indicators, and AI algorithms analyze this data to detect subtle changes that might signal an impending health crisis.
This proactive approach could prevent hospitalizations, improve treatment outcomes, and ultimately enhance patients' quality of life.
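A minimal sketch of the core primitive behind such monitoring is a rolling baseline with a z-score alert. Real systems are far more sophisticated; the window size, threshold, and readings below are illustrative assumptions.

```python
# Rolling-baseline anomaly check for a continuous biomarker stream.
# Window size, threshold, and readings are illustrative assumptions.
import statistics
from collections import deque

window = deque(maxlen=48)  # e.g., the last 48 hourly readings

def check_reading(value: float, z_threshold: float = 3.0) -> str:
    if len(window) >= 12:  # wait for enough history before alerting
        mean = statistics.mean(window)
        std = statistics.stdev(window) or 1e-9  # guard against zero spread
        if abs(value - mean) / std > z_threshold:
            # Anomalies are reported but not added to the baseline,
            # so a health event doesn't contaminate "normal".
            return f"ALERT: {value} deviates from baseline {mean:.1f}"
    window.append(value)
    return "ok"

readings = [72, 74, 71, 73, 75, 72, 74, 73, 70, 72, 74, 73, 71, 110]
for r in readings:
    print(check_reading(r))
```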
These advancements are not just incremental improvements; they represent a paradigm shift in how we approach healthcare. AI News is constantly reporting on breakthroughs in AI-driven diagnostic tools, and the trend is clear: AI is becoming an indispensable tool for healthcare professionals. As AI continues to evolve, we can expect to see even more innovative applications that transform the way we diagnose, treat, and prevent disease.

AI Regulation Takes Hold: EU AI Act and US State Laws Shape the Future of AI Governance
The world of AI is rapidly evolving, and with it comes the need for clear and effective regulation. The landscape is shifting as governments worldwide grapple with the implications of this transformative technology.
EU AI Act: A New Era of AI Governance
On August 2nd, a significant milestone was reached as the EU AI Act's obligations for general-purpose AI models went live. This landmark legislation sets a new standard for AI governance, mandating transparency and safety checks for AI systems operating within the European Union. Think of it as a rigorous quality-control process, ensuring that AI models meet specific benchmarks before they can be widely deployed.
The EU AI Act aims to foster innovation while safeguarding fundamental rights and ethical principles.
The core tenets of the EU AI Act include:
Transparency Requirements: Developers must provide detailed documentation about their AI models, including data sources, algorithms, and potential biases. This is like providing a detailed ingredient list for a complex recipe, allowing users to understand what's going into the final product (a machine-readable sketch of such documentation follows this list).
Safety Assessments: AI systems deemed high-risk must undergo thorough safety assessments to identify and mitigate potential harms. This involves rigorous testing and validation processes, similar to the safety checks performed on new vehicles before they are released to the public.
Compliance Framework: Companies must establish robust compliance programs to ensure ongoing adherence to the Act's requirements. This includes designating responsible individuals, implementing internal controls, and regularly auditing AI systems.
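To make the transparency requirement concrete, here is a minimal sketch of machine-readable model documentation in the spirit of a model card. The fields are illustrative assumptions, not the EU AI Act's official template.

```python
# Illustrative machine-readable model documentation (model-card style).
# Field names are assumptions, not the EU AI Act's official template.
import json

model_card = {
    "model_name": "example-gpai-model",
    "provider": "Example Corp",
    "training_data_sources": ["licensed corpus X", "filtered public web crawl"],
    "known_limitations": ["may reflect biases present in web text"],
    "risk_category": "general-purpose AI",
    "evaluation_summary": {"benchmark": "internal eval suite", "notes": "..."},
    "contact": "compliance@example.com",
}

print(json.dumps(model_card, indent=2))
```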
U.S. States Take Action: Illinois Leads the Way
While the EU takes a comprehensive approach, individual states in the U.S. are beginning to address specific AI-related concerns. A notable example is Illinois, which has banned AI systems from acting as independent therapists without licensed clinician oversight. The decision reflects growing anxieties about potential harm in sensitive areas like mental healthcare, where human empathy and judgment are paramount.
The Illinois ban highlights the nuanced challenges of regulating AI in specialized fields, where human oversight remains crucial.
The Compliance Conundrum: Navigating Divergent Frameworks
The emergence of divergent regulatory frameworks in the EU and the U.S. creates a complex AI compliance landscape for global organizations. Companies operating in multiple jurisdictions must navigate a patchwork of rules and standards, which can be both challenging and costly.
Consider a tech company developing a new AI-powered customer service chatbot. To deploy this chatbot in both the EU and the U.S., the company must ensure that it complies with the EU AI Act's transparency requirements and the specific regulations of any U.S. states where it operates. This might involve modifying the chatbot's algorithms, implementing new data governance policies, and establishing a dedicated compliance team. For additional context, you might want to check out recent AI News covering related regulatory developments.
Building Trust: The Foundation for AI Adoption
Despite the challenges, these regulatory efforts are essential for building trust in AI. By establishing clear rules and standards, governments can help to ensure that AI systems are developed and deployed responsibly, ethically, and safely. This trust is critical for fostering broader AI adoption across industries and societies.
AI ethics and responsible innovation must be at the forefront of our efforts.
Trust is like the foundation of a building: without it, AI cannot reach its full potential. Tools like ZeroGPT, an AI detector designed to flag potentially AI-generated content, play a role in bolstering trust in content authenticity, while robust AI safety protocols mitigate risks and promote confidence in AI systems.
As AI continues to evolve, so too will the regulatory landscape. It is imperative that governments, industry leaders, and researchers work together to develop flexible, adaptive frameworks that keep up with the rapid pace of innovation – always with the best interests of society in mind.

🎧 Listen to the Podcast
Hear us discuss this topic in more detail on our latest podcast episode: https://creators.spotify.com/pod/profile/bestaitools/episodes/International-AI-Press-Digest---August-5--2025-Major-AI-Developments-Reshaping-the-Global-Landscape-e36fcco
Keywords: AI, Artificial Intelligence, Machine Learning, OpenAI, Google DeepMind, Anthropic, Meta, DeepSeek, Open Compute Project, AI Diagnostics, EU AI Act, AI for Government, Personal Superintelligence, AI Regulation, AI Chiplets
Hashtags: #AI #ArtificialIntelligence #MachineLearning #TechNews #Innovation
For more AI insights and tool reviews, visit our website https://best-ai-tools.org, and follow us on our social media channels!
Website: https://best-ai-tools.org
X (Twitter): https://x.com/bitautor36935
Instagram: https://www.instagram.com/bestaitoolsorg
Telegram: https://t.me/BestAIToolsCommunity
Medium: https://medium.com/@bitautor.de
Spotify: https://creators.spotify.com/pod/profile/bestaitools
Facebook: https://www.facebook.com/profile.php?id=61577063078524
YouTube: https://www.youtube.com/@BitAutor