Copilot or else! (for OEMs...eventually)

I think you just identified the real issue here. People see the word "AI", and immediately unleash all of their random AI-related fears onto the topic the same way someone might unleash yesterday's Taco Bell upon their toilet. I mean, you immediately referenced an 8-year-old bad movie in order to provide some sort of "example" rather than actually trying to understand what Co-pilot even is. You're old and change is scary, we get it.

AI is incompetent mimicry. That's all it is. Recursive probability and statistics that get increasingly better at mimicking, but it is utterly incapable of intelligence, reasoning, or creating anything of its own.
 
I believe Mazz was referring to the "menu" button on his keyboard.
 
AI is incompetent mimicry. That's all it is. Recursive probability and statistics that get increasingly better at mimicking, but it is utterly incapable of intelligence, reasoning, or creating anything of its own.

You're just making general, vague statements about "AI". Copilot, first and foremost, is simply a Cortana replacement. An optional tool to potentially make it easier for some people to do certain things with their computer. Similar to how it's often now easier to search for what you want instead of trying to navigate a maze of menus, with Copilot you can now simply give verbal commands to find things or change many settings. It's not there to write your thesis for you.


41 replies to a 4+ month old thread, damn, we must be on the cusp of a revolution.
 

I mean, to be fair, no change, no matter how inconsequential, should ever be pushed on people.

The only way my computer should ever change is if I intentionally change it.

If it changes on its own, I am going to be enraged even if that change is Jesus itself.

I mean, it is one thing if an OS goes EOL and then when I install the new one it is different. That I am OK with. I'll still complain about features I don't like, but at least I get it.

But to just wake up one day and have my computer be different out of nowhere. That is completely and totally unacceptable. All things should stay exactly the same unless I intentionally change them. By all means, patch bugs and security vulnerabilities, but don't fucking change shit on me without me opting in! :rage:

It's my computer. I should be in absolute control of it at all times!

Why can't these stupid tech companies learn to just leave shit the way it is and not mess with it.

That's great, you have a new product. If I want it, I'll go find it and install it and use it. If you just take it upon yourself to install it and push it on me, I am going to resist it as much as possible, and pretty much refuse to ever use it.

Try to use your position as the creator of an OS I use to push other unrelated things on me and you are automatically my enemy. If you ask me, using one product to push another product or a new feature should be illegal. That is market manipulation. That's why the DOJ sued Microsoft years ago: for using Windows to push IE.
 
Co-pilot, first and foremost, is simply a Cortana replacement. An optional tool to potentially make it easier for some people to do certain things with their computer.
As of now, in my experience, it tends to be first and foremost a replacement for opening a browser, searching Google or DuckDuckGo for something, clicking on the link, and looking at the webpage.

AI is incompetent mimicry. That's all it is. Recursive probability and statistics that get increasingly better at mimicking, but it is utterly incapable of intelligence, reasoning, or creating anything of its own.
The search engine you will use instead of searching with Copilot will also use AI. Trying to avoid weighted matrix calculation as a form of compute will probably sound very strange to your future self, a bit like the people angry at the existence of GPUs when they launched (even if all the worst elements they predicted came true).

The notion that it cannot create anything on its own (but that humans can) sounds like speculation. Claude 3 seems to have been able, from the existing scientific literature, to come up with the same things scientists did in yet-to-be-published papers it could not have trained on; by raw trial and error AI came up with new ways to play Go; AlphaFold created a list of new predictions of how proteins would fold; GraphCast comes up with the weather for the next 10 days on its own.

And the idea that the world should refuse to use ML-trained translators and voice generators to speak to each other, to look for defects on things, and a giant list of other applications just because that's not "real" reasoning or creating anything new... so? How is that different from non-ML-based computing, exactly?
 
As of now, in my experience, it tends to be first and foremost a replacement for opening a browser, searching Google or DuckDuckGo for something, clicking on the link, and looking at the webpage.

What you are talking about is what started as "Bing Chat"; part of Bing and Microsoft Edge. It differs in many clear and obvious ways from what is being integrated into Windows 11 (which is mainly what is being talked about in this thread). However, starting last year, Microsoft began trying to unify many of its separate AI-related technologies under the "Copilot" branding.
 
What you are talking about is what started as "Bing Chat"; part of Bing and Microsoft Edge.
This is pretty much what copilot is.

If you click on the Copilot button right now (or press Windows+C), what you get is very similar to what Bing Chat was; you just do not need to open Bing in Edge to access it.

Maybe I do not have the very latest version of it on my Windows install, but as of now, if you tell it "Change my monitor refresh rate to 120 Hz", it does not do it for you; it tells you how you can do it, like Bing Chat would. (It seems to handle only the simplest things, like opening Notepad. I am not sure people use it much that way versus as a replacement for web search: search, look at the results, click on the best one, read the webpage. With Copilot/Bing Chat you get the result directly, in the way you asked, with no ads, etc.)

What can this Copilot (the free one that comes with Windows) do right now that is different from Bing Chat? Sometimes people seem to be talking about something else, which is possible, as Microsoft does market-by-market deployments and subgroup tests within them.
 
As of now, in my experience, it tends to be first and foremost a replacement for opening a browser, searching Google or DuckDuckGo for something, clicking on the link, and looking at the webpage.


The search engine you will use instead of searching with Copilot will also use AI. Trying to avoid weighted matrix calculation as a form of compute will probably sound very strange to your future self, a bit like the people angry at the existence of GPUs when they launched (even if all the worst elements they predicted came true).

The notion that it cannot create anything on its own (but that humans can) sounds like speculation. Claude 3 seems to have been able, from the existing scientific literature, to come up with the same things scientists did in yet-to-be-published papers it could not have trained on; by raw trial and error AI came up with new ways to play Go; AlphaFold created a list of new predictions of how proteins would fold; GraphCast comes up with the weather for the next 10 days on its own.

And the idea that the world should refuse to use ML-trained translators and voice generators to speak to each other, to look for defects on things, and a giant list of other applications just because that's not "real" reasoning or creating anything new... so? How is that different from non-ML-based computing, exactly?

I want my desktop (and my phone) to work exactly the same way my Desktop has worked since the early oughts from now until the day I die.

Like, don't get me wrong. I'm totally on board with more powerful CPUs and GPUs, but I want the UI to behave the same way, and I don't want any more automation than I had then.

If I want to search for something, I want to open a web browser, point it towards the search providers page, and type in my query. I don't ever want that to change, or to be integrated anywhere else in the OS.

If others want to play with AI/ML, that's great, let them; just keep it away from my workplace, the development of any product I buy, and my computer and phone, and we are cool.

I don't ever want to talk to my computer or phone. I don't ever want my phone or computer to offer me suggestions I didn't ask for, I don't ever want it to do anything automation-wise that I didn't manually set up and tell it to do, and I don't want it to ever reach out to another device on the network (LAN or WAN) without me explicitly telling it to (or at least without me explicitly setting up and configuring the automation).

I don't want my phone or computer to be always listening (or always watching) and offering to identify things for me. (or submitting that data to anyone else). I want it to be illegal to collect any data about a person for any reason, even with consent in exchange for free services, and I want there to be harsh prison sentences for those who do.

I want scraping to be illegal. I want information put on the internet (like forum posts, reviews, or pictures) to only be legal to use for the purpose the person who posted it posted it for, and nothing else, including training AI or market research. I don't think Reddit, or anyone else, should have the legal authority to sell access to that content. It should exclusively belong to the users who created it. Sure, by all means, train your ML/AI on data you own, but not on anything gathered from the open internet that you don't have an explicit written license to use from the individual who created it.

I don't want my computer or phone to analyze my pictures for data to search for, faces to identify or anything like that. I don't want MY face digitally scanned and assigned a face print for identification by anyone, anywhere, without my explicit written consent every time, and if anyone does it without my consent I want them to go to prison.

...and I am pretty close to going postal over it.

These are all red lines for me. Red lines I think using lethal force to defend myself against those who cross them should be justifiable.

I'm not a violent man. I've never even been in a fight, but man does this shit make my blood boil.

I think we need a revolution, and I think we need those responsible for our current dystopia up against a wall. Anyone who has ever worked for Google, Facebook, Amazon, LinkedIn, any data broker. Anyone who has ever used user data for any purpose. They are enemies of the people. Anyone who thought this was OK. They need to be put down.

If you catch someone peeping in your window, you'd be justified in defending yourself, and it would be very hard to find a jury who would convict you. Big tech is doing this every day. Those behind that big tech at every level from Investors, Board, CEO, to individual programmers and even their caterers should be held accountable. They could choose to work for anyone else. They didn't have to choose evil. And they should pay for choosing evil.
 
If others want to play with AI/ML, that's great, let them; just keep it away from my workplace, the development of any product I buy, and my computer and phone, and we are cool.
But the search engine you use with your browser will use it (from word suggestions to the results page); I do not think avoiding it will be much of a practical choice.

Let's say you are the Lay's company. You have a human-coded algorithm that judges whether potato chips are good enough from pictures taken of them. Someone, using the last 10,000 results and manually validating good/bad chips, comes up with an ML system that lets 50% fewer bad potatoes through and cuts waste by reducing the number of good chips wrongly judged bad.
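To make that chip-sorting example concrete, here is a minimal sketch of the idea: instead of a hand-tuned pass/fail rule, pick the cutoff that makes the fewest mistakes on the manually validated history. All the numbers (the "brownness score" feature, the labels, the hand-coded cutoff of 0.3) are made up for illustration.

```python
# Replacing a hand-coded pass/fail rule with a threshold learned from
# manually validated examples. All numbers here are invented for illustration.

def learned_threshold(samples):
    """Pick the cutoff on a single 'defect score' feature that makes the
    fewest mistakes on the labeled history of (score, is_good) pairs."""
    candidates = sorted(s for s, _ in samples)
    best_cut, best_errors = None, len(samples) + 1
    for cut in candidates:
        # Classify "good" when score <= cut, then count mistakes.
        errors = sum((score <= cut) != is_good for score, is_good in samples)
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut, best_errors

# Labeled history: (brownness score, manually judged good?)
history = [(0.1, True), (0.2, True), (0.35, True), (0.4, False),
           (0.45, True), (0.5, False), (0.7, False), (0.9, False)]

hand_coded_cut = 0.3  # the old human-tuned rule
hand_errors = sum((s <= hand_coded_cut) != good for s, good in history)

cut, errors = learned_threshold(history)
print(hand_errors, errors)  # prints "2 1": the learned cutoff makes fewer mistakes
```

Real systems learn from image pixels with far more capable models, but the principle is the same: the rule is fitted to validated outcomes instead of being hand-coded.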

That's what a lot of real-world AI/ML will be. There will not be many complex products you buy whose total supply chain is free of it (at least among those made in the last 5 years). The COVID vaccines were full of it, for example; the petroleum you put in your car comes from the first industry to go really hard on this, more than a decade ago; and your CPU's design certainly will be.
 
I don't care that they are adding another key, but a 4th key in the "Prt Scr, Scr Lck, Pause Break" row would have been better, I think. While they are at it, move the Windows key up there too. I keep it disabled because it tabs you out of games.

My keyboard, on the right side of the spacebar, has Alt, Fn, Menu (the right-click context menu), and Ctrl.

I never use that right-click context button, but I bet for people using some accessibility features it gets a lot of use.
Fn appears to be a keyboard-specific modifier, but I don't know what it does.
 
fn appears to be a keyboard specific modifier, but I don't know what it does.
Fn = function, as in secondary functions: backlight controls, media player buttons, turning off the touchpad, etc. It's usually designated by a color (IBM uses blue, IIRC) or, like on the Fujitsu I have in front of me, the secondary functions are shown in a white square.
 
I don't care that they are adding another key, but a 4th key in the "Prt Scr, Scr Lck, Pause Break" row would have been better, I think. While they are at it, move the Windows key up there too. I keep it disabled because it tabs you out of games.

My keyboard, on the right side of the spacebar, has Alt, Fn, Menu (the right-click context menu), and Ctrl.

I never use that right-click context button, but I bet for people using some accessibility features it gets a lot of use.
Fn appears to be a keyboard-specific modifier, but I don't know what it does.

They want it to be the primary way you interact with your computer, so of course they are going to put it in the most obnoxious place possible.
 
I use the Windows key all the time, to open the Start menu so I can type the first few letters of the program I want to run.

Same.

I thought everyone did this. It is very useful. Fastest way to open programs IMHO

Way faster than going hunting and pecking with a mouse pointer.
 
https://www.tomshardware.com/pc-com...gen-ai-pcs-require-40-tops-of-npu-performance
Well, this is a nice development. For security reasons, we obviously can't be uploading the majority of our documents to some cloud AI for it to regurgitate and spit out some clarified report, at the risk of the data being used for training and leaking.
But if Microsoft trains their models internally and we are just running their engine locally, then that closes a significant barrier on the PIA and data-security side of things.
 
Windows+X is also useful: Windows+X then U if you want to go to sleep, and so on; Event Viewer, etc. Everything tends to be 2 keys away.
 
But if Microsoft trains their models internally and we are just running their engine locally, then that closes a significant barrier on the PIA and data-security side of things.
Also, running those datacenters at current inference cost per token must be costing Microsoft a small fortune (free Bing image generation, for example), so shifting those costs back to the consumer would be nice for them. They will talk about lower latency for users in exchange for shifting the cost, but I am not sure high-speed internet latency is a big part of it at the moment. They will talk about security, which would be a non-issue for a lot of use cases for a lot of people, and I am not sure the local computer would be more secure than what a company like Microsoft can do.

As those models' context sizes expand, maybe they would not need to be trained on personal data anymore, as the amount of data most people have will all fit in context anyway.
 
Also, running those datacenters at current inference cost must be costing Microsoft a small fortune; shifting those costs back to the consumer would be nice for them.

As those models' context sizes expand, maybe they would not need to be trained on personal data anymore, as the amount of data most people have will all fit in context anyway.
That, and advances in PyTorch allow things to be split between GPU memory and system RAM (we can thank the Nvidia embargo for that feature), which greatly simplifies deployment, as Microsoft can just choose to use system RAM exclusively at that stage. That removes the need to worry about how much actual VRAM is present, as long as there is sufficient system RAM to do the job.
 
Windows+E followed by Alt+D; there are a lot of classics like Windows+R, or more modern ones like the HDR shortcut, virtual desktop management, snapping windows to the side or making them full screen, etc.
 
What is interesting is that a lot of the recent talk about Copilot has been about a shift from Copilot being primarily cloud-based to being more local. This will potentially be tied, in some way, to a minimum AI TOPS processing requirement. Thus far, the requirement has been stated as 40 TOPS. It's easy to see this becoming some kind of baseline requirement in the future (Windows 12?).

What is frustratingly difficult to find are numbers on how many AI TOPS various hardware is capable of. I'm particularly interested in what kind of TOPS existing, or even "older", hardware might be capable of. Unfortunately, almost 100% of the talk right now is an almost obsessive and singular focus on the "NPUs" being released as part of newer CPUs. For example, Intel's new Meteor Lake NPU offers up to 10 TOPS, and AMD's new Ryzen Hawk Point platform contains an NPU with 16 TOPS. However, in the NPU discussion, I've noticed a few things:

-The focus thus far has been releasing NPUs in mobile hardware.
-When talking about the capabilities of the NPU, there is a heavy focus not only on its TOPS, but also its efficiency (low power consumption).
-When the TOPS output of these devices is discussed, the overall TOPS output is close to the discussed 40 TOPS, despite the NPUs in those devices only adding 10/16 TOPS respectively. This could only be explained by the overall TOPS output including other processing power from the CPU and/or GPU.
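The arithmetic behind that last observation is trivial but worth making explicit. In this sketch, only the ~10 TOPS NPU figure comes from the discussion above; the GPU and CPU contributions are assumptions for illustration, not published numbers:

```python
# Back-of-envelope: "Total AI TOPS" as a sum of per-device contributions.
# The GPU and CPU numbers below are assumptions for illustration only;
# vendors don't always publish them, and the precisions used may not
# even be directly comparable across devices.

platform_tops = {
    "NPU": 10,   # e.g. the ~10 TOPS quoted for Meteor Lake's NPU
    "GPU": 18,   # assumed iGPU contribution
    "CPU": 11,   # assumed CPU contribution
}

total = sum(platform_tops.values())
print(total)        # prints 39 -- near the talked-about 40 TOPS figure
print(total >= 40)  # prints False: the NPU alone is far short; the total is close
```

This is why "Total AI TOPS" marketing figures land near 40 even when the NPU itself only contributes 10-16.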

Tensor Cores in Nvidia cards have long been talked about in the context of "AI", but whenever the subject of "Tensor Cores VS NPU" is brought up, most articles go out of their way to make an NPU seem like something new that will do the job better than a GPU can. It's difficult to dissect and separate the marketing from reality on this one.

In the announcement of the new 4000-series Super cards earlier this year, Nvidia claimed that a 4080 Super could do "836 AI TOPS" (!), presumably via the use of its Tensor Cores.
https://nvidianews.nvidia.com/news/geforce-rtx-40-super-series

It's unknown if the way they calculated those "TOPS" represents an apples-to-apples comparison with the way the NPUs mentioned earlier calculated their "TOPS", or if they are really even comparing the same thing. But if the number is true, then it would seem like anything with even an older first-generation RTX GPU should be able to easily meet the 40 TOPS requirement. It would basically make an NPU irrelevant on any system with a newer dedicated GPU, which I suppose explains the NPU focus on mobile hardware and low power consumption.

But there are still so many unanswered questions.
-How many TOPS can a CPU contribute, and does this scale infinitely with cores? (This might change the equation when talking about something like the 7800X3D vs 7950X3D, or maybe even finally give those E-cores on the newer Intel CPUs a reason to exist.)
-Can GPUs contribute toward the TOPS requirement in ways that don't involve Tensor Cores? Using CUDA, for example? (This might be a big deal if you still have a decent card like a 1080 Ti or 1660 Ti that doesn't have Tensor Cores.)
-Can Co-Pilot, when run locally, actually utilize multiple different types of processors simultaneously? (NPU + GPU Tensor Cores + CPU? + CUDA?)

Personally, I'm looking forward to this getting past the AI marketing-masturbation stage to the point where we can get some actual facts.
 
What is frustratingly difficult to find are numbers related to what AI TOPS various hardware is capable of.

One of the issues is that it depends on the inference model's precision: 4-bit/8-bit/16-bit will give different values on some hardware, as some devices take advantage of lower precision and others don't.

If they mean 8-bit integer TOPS, to give a clue of older hardware's capability:

https://images.nvidia.com/aem-dam/e...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf
A 2080 could do around 160 on the tensor cores.

I think you can use regular CUDA cores no problem; they will just not be as efficient at it.
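That ~160 figure can be reproduced with a back-of-envelope calculation. The inputs below (368 tensor cores, ~1710 MHz boost clock, 128 INT8 multiply-accumulates per tensor core per clock, each MAC counted as 2 ops) are my reading of the publicly listed Turing/RTX 2080 specs, so treat them as assumptions:

```python
# Back-of-envelope INT8 TOPS estimate for an RTX 2080 (assumed specs):
tensor_cores = 368                    # 46 SMs x 8 tensor cores per SM
boost_clock_hz = 1710e6               # ~1710 MHz boost clock
int8_macs_per_core_per_clock = 128    # assumed Turing tensor core INT8 rate

# Each multiply-accumulate counts as 2 operations (one mul + one add).
ops_per_second = tensor_cores * boost_clock_hz * int8_macs_per_core_per_clock * 2
tops = ops_per_second / 1e12
print(round(tops, 1))  # prints 161.1, i.e. "around 160"
```

The same formula (units x clock x ops-per-unit-per-clock) is roughly how any of these peak-TOPS marketing numbers is derived, which is also why they say little about sustained real-world throughput.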

It would basically make an NPU irrelevent on any system with a dedicated newer GPU, which I supose explains the NPU focus on mobile hardware and low power consumption.
Yes, I think that is likely, at least for tasks where some latency between the result and the CPU does not matter. This is not for people with a 2060 or higher Nvidia card that can throw hundreds of watts at the problem, but more for laptops without a dedicated GPU, I presume, or for tasks where the GPU is already quite busy.
 
-Can Co-Pilot, when run locally, actually utilize multiple different types of processors simultaneously? (NPU + GPU Tensor Cores + CPU? + CUDA?)
From my very limited understanding, it would be quite a challenge memory-wise to work on the same context while doing inference, and for a lot of tasks the previous token is an input to the next calculation; parallelism makes more sense for handling many different requests at the same time.

What is more plausible is different AI tasks running at the same time on different devices: a modern CPU can often be strong enough to do text-to-speech in real time, which frees resources for, say, the text-to-video part on the GPU, while maybe the NPU is the one that comes up with said text. Everything still works if you lack one part, just slower.
 
From my very limited understanding, it would be quite a challenge memory-wise to work on the same context while doing inference, and for a lot of tasks the previous token is an input to the next calculation; parallelism makes more sense for handling many different requests at the same time.

What is more plausible is different AI tasks running at the same time on different devices: a modern CPU can often be strong enough to do text-to-speech in real time, which frees resources for, say, the text-to-video part on the GPU, while maybe the NPU is the one that comes up with said text. Everything still works if you lack one part, just slower.

I agree that it would be quite the feat if they are able to distribute the workload to different processors in a way that actually works correctly.

But I keep seeing articles like this:
https://wccftech.com/intel-outlines...ce-minimum-requirement-windows-copilot-ai-pc/

When it quotes "Total AI TOPS" for each, it clearly includes more than just the NPU. Since this article was written in the context of Copilot, that left me wondering whether only the NPU's TOPS would be relevant for Copilot (in this case less than half of the total TOPS on each platform that has an NPU), or whether it's about the total. And would the 40 TOPS "requirement" apply to the total, the NPU only, or what?
 
But I keep seeing articles like this:
I think they are very misleading. For example:

Running Microsoft Copilot Locally Would Require At least 40 AI TOPs


In the official documentation, they always prudently talk about running part of Copilot locally (we can imagine stuff like changing your PC settings or text-to-speech, the easiest parts). Even in the article, "running Copilot locally" becomes: "just like they will run Copilot with more elements of Copilot running locally on the client."

When it quotes "Total AI TOPS" for each, it clearly includes more than just the NPU.
On those APUs, the NPU, CPU, and iGPU all use the same memory pool, which could make the idea of working together quite a bit easier.

The software moves so fast in that field, I am not sure even they know exactly what it will be once it launches.
 
I was wondering what MS-CP was since I saw it pop up on my Win11 systems, but I just disabled it without looking into it.
If MS wants me to use their AI, then they had better make it unrestricted, smart, and powerful, and let me train it to be my dirty little submissive slut that can also generate images and sort my files.

https://www.youtube.com/watch?v=8SCCihCUI4U
 

The amusing part is, I searched for this article on my work machine using Bing (I don't usually use Bing) in Edge (I don't usually use Edge either, but I also don't usually browse anything on my work machine), and apparently Copilot now summarizes articles for you when you search for them in Bing.

So I - without asking for it - got a co-pilot summary of an article about congress banning the use of co-pilot among staffers.
 
Becoming angry about co-pilot seems odd to me. It's a natural progression. Are you also angry that your OS has built-in local search? No one is forcing you to use it, and I have seen zero evidence of copilot using any relevant amount of CPU resources in the background even if you're not using it. Keep in mind that I have Windows 11 running on some very old, slow computers, where excess CPU usage would be very easily noticeable.
Of course not, but you don't need an AI algorithm for local search. That is what indexing is for on Windows.
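The indexing point is worth spelling out: local file search is a solved problem without any ML. A minimal sketch of an inverted index (with made-up filenames and contents) shows why lookups are instant:

```python
# Minimal sketch of why local file search doesn't need AI: a plain inverted
# index maps each word to the set of files containing it, so a query is
# just a dictionary lookup. Filenames and contents below are hypothetical.

from collections import defaultdict

files = {
    "notes.txt": "meeting notes about the quarterly budget",
    "todo.txt": "buy milk and review budget spreadsheet",
    "recipe.txt": "pancake recipe with milk and eggs",
}

# Build the index once (this is what the OS indexer does in the background).
index = defaultdict(set)
for name, text in files.items():
    for word in text.lower().split():
        index[word].add(name)

# Querying is then a constant-time lookup, no model inference required.
print(sorted(index["budget"]))  # prints ['notes.txt', 'todo.txt']
print(sorted(index["milk"]))    # prints ['recipe.txt', 'todo.txt']
```

Real indexers add tokenization, ranking, and incremental updates, but the core mechanism is this table, not a neural network.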

What I do take issue with is internet search being the default on Windows 11. It needs to be opt-in, not opt-out, just like it should be with Copilot. Microsoft can make it one of the stupid questions they ask you when finalizing Windows setup if they're afraid people will miss or overlook it.
 
At least on W10 (don't have W11 handy at the moment), Copilot is now removable under Apps & Features. Not sure when that changed; I just noticed it...
 
Of course not, but you don't need an AI algorithm for local search. That is what indexing is for on Windows.

What I do take issue with is internet search being the default on Windows 11. It needs to be opt-in, not opt-out, just like it should be with Copilot. Microsoft can make it one of the stupid questions they ask you when finalizing Windows setup if they're afraid people will miss or overlook it.
I think it's a placement problem: the search box still does plain search without all the AI stuff, but now it also does all the AI stuff, so you don't know when you are or aren't doing something with Copilot when you use the box.
In the enterprise Copilot preview, it only works in Edge, so when you click the Copilot button it opens a browser tab and snaps it to the right, leaving the remaining two-thirds of your screen for whatever else you had going on.
 
Have a default or recommended baseline setup and let users flip things on or off, and everyone (or close enough) would be content. Yet the majority seem to find the "convenience" of having updates, features, or whatever else forced upon them preferable.
 